From: service@paypal.com

Dear PayPal® valued member,

It has come to our attention that your PayPal Billing Information records are out of date. This requires you to update your Billing Information.
Failure to update your records will result in account termination. Please update your records within 72 hours. Once you have updated them, your PayPal session will not be interrupted and will continue as normal. Failure to update them will result in cancellation of service, Terms of Service (TOS) violations, or future billing problems. Please follow the link below and update your account information:

https://www.paypal.com/cgi-bin/webscr?cmd=_login-run

...........................................................................

Thank you for using PayPal!
The PayPal Team

Your monthly account statement is available anytime; just log in to your account at https://www.paypal.com/us/HISTORY. To correct any errors, please contact us through our Help Center at https://www.paypal.com/us/HELP.

---------------------------------------------------------------------------
FOR INTERNATIONAL PAYMENTS ONLY:
Commissions and Fees incurred by sender: $0.00

Rate of exchange: If and when the Recipient chooses to withdraw these funds from the PayPal System, and if the withdrawal involves a currency conversion, the Recipient will convert the funds at the applicable currency exchange rate at the time of the withdrawal, and the Recipient may incur a transaction fee.

RIGHT TO REFUND
You, the customer, are entitled to a refund of the money to be transmitted as a result of this agreement if PayPal does not forward the money received from you within 10 days of the date of its receipt, or does not give instructions committing an equivalent amount of money to the person designated by you within 10 days of the date of the receipt of the funds from you, unless otherwise instructed by you.

If you want a refund, you must mail or deliver your written request to PayPal at P.O. Box 45950, Omaha, NE 68145-0950. If you do not receive your refund, you may be entitled to your money back plus a penalty of up to $1,000.00 USD and attorney's fees pursuant to Section 1810.5 of the California Financial Code.
---------------------------------------------------------------------------

Please do not reply to this email. This mailbox is not monitored and you will not receive a response. For assistance,
log in to your PayPal account and choose the Help link located in the top right corner of any PayPal page.

To receive email notifications in plain text instead of HTML, update your preferences here.

PayPal Email ID PP120

From: falk at isi.edu (Aaron Falk)
Date: Mon, 10 Jul 2006 08:54:10 -0400
Subject: [Tmrg-interest] Fwd: Possible Internet2 awards program for new TCP stacks
References: <6.2.0.14.2.20060706140924.03dfc1a0@mail.internet2.edu>
Message-ID: <88676A75-868F-425B-AD3F-832093BC4A59@ISI.EDU>

Of possible interest.

--aaron

Begin forwarded message:

> From: Richard Carlson
> Date: July 6, 2006 2:44:48 PM EDT (CA)
> To: Joe Touch, Pascale Primet lyon.fr>, Katsushi Kobayashi,
>     Aaron Falk, "Douglas Leith", "R. Hughes-Jones", "Cottrell, Les",
>     Brian Tierney
> Cc: Injong Rhee, Sally Floyd, Jason Leigh, Richard Carlson
> Subject: Possible Internet2 awards program for new TCP stacks
>
> All;
>
> Hopefully you all know about the Internet2 Land Speed Record (LSR)
> awards program, http://lsr.internet2.edu/. Briefly, the LSR awards
> program was established in 2000 to promote demonstrations that
> highlight TCP's potential. As the rules state, each entry must be
> an RFC-791/793 stack with 1 or more sockets sending data between 2
> Internet nodes.
>
> The original record was 751 Mbps over 5600 Kmeters. The latest
> record is 8.8 Gbps over 30,000 Kmeters (the web page is out of
> date). Given that the current record crossed some 10 GE WAN-PHY
> links and the rules call for a 10% increase to set a new record, we
> may have to wait a while before the next-generation 40/100 Gig HW
> comes out.
>
> While these experiments show that TCP is capable of running at line
> rates over any distance, we all know that performance can drop
> dramatically if losses occur. In fact, if you look at the MRTG
> graphs the winning teams supply, you can see they make multiple runs
> and discard tests that suffer from any type of packet loss.
>
> Internet2 is looking for some way to promote testing, evaluation,
> and growth in the field of TCP stack research. One idea is to
> create a new awards program, along the lines of the existing LSR
> program.
> I'm looking for advice and guidance on:
>
> 1) is this a good idea?
>
> 2) if so, what rules need to be changed/modified?
>    a) should some type of loss metric be included?
>    b) verification mechanisms?
>    c) is distance an important metric?
>    d) ensure others duplicate/repeat the experiment
>
> 3) if not, is there a better way to promote TCP stack research?
>
> Thanks in advance. Any thoughts, comments, or suggestions would be
> greatly appreciated. Feel free to share this email with your
> colleagues and collaborators.
>
> Regards;
> Rich
>
> ------------------------------------
>
> Richard A. Carlson          e-mail: RCarlson at internet2.edu
> Network Engineer            phone: (734) 352-7043
> Internet2                   fax: (734) 913-4255
> 1000 Oakbrook Dr; Suite 300
> Ann Arbor, MI 48104

From: lachlan at caltech.edu (lachlan at caltech.edu)
Date: Tue, 11 Jul 2006 11:46:42 -0700 (PDT)
Subject: [Tmrg-interest] Possible Internet2 awards program for new TCP stacks
Message-ID: <9555287327lachlan@caltech.edu>

> From: Richard Carlson
> Date: July 6, 2006 2:44:48 PM EDT (CA)
>
> Internet2 is looking for some way to promote testing, evaluation,
> and growth in the field of TCP stack research. One idea is to
> create a new awards program, along the lines of the existing LSR
> program. I'm looking for advice and guidance on: [...]
>
> 3) if not, is there a better way to promote TCP stack research?

This sounds like a very good idea. If the RFC793 restriction is
lifted, it will be important not simply to favour the most aggressive
protocol.
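[The concern about rewarding the most aggressive protocol is usually addressed with an explicit fairness metric. As a hypothetical illustration (not part of the original thread), Jain's fairness index over per-flow throughputs distinguishes an equal sharing from a single flow starving the rest:]

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1.0 when all flows get equal throughput; tends toward
    1/n when a single flow starves the rest."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# An award judged on aggregate throughput alone could not tell these apart:
equal_share = jain_fairness([10.0, 10.0, 10.0, 10.0])  # -> 1.0
one_hog = jain_fairness([37.0, 1.0, 1.0, 1.0])         # well below 1.0
```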
One approach would be to devise a more comprehensive "benchmark" (AKA
"best practice test suite"), which results in a single figure
representing throughput, fairness etc.

One question would be whether it is better to use "live" networks, or
more controllable testbeds. Just as the LSR keeps the *protocol*
fixed, the competition for protocols should keep the *hardware* as
standard as possible. Here at Caltech, we're looking at setting our
WAN-in-Lab up as a common platform for multiple users to benchmark
their protocols on. Another possibility would be to specify a common
hardware setup (CPU speeds, bus types, NICs, link emulators, all
intervening switches, ...). This touches on points (b) and (d): if
there are several "standard" testbeds set up, then repeatability can
be achieved by repeating on all testbeds, and verifiability can be
achieved by having a standard logging system built in to the testbeds.

It would be nice if everyone compared their stacks over a standard
distance. If that is not practical (for example, if the award is for
"live" networks) then reporting (achieved_bandwidth * delay) would be
more effective. However, high values can be achieved with "slow"
(computationally intensive) algorithms by using very long paths
(circulating the globe multiple times, for example).

David Wei recently pointed out to me the Google File System (GFS) test
of running 32 independent parallel file transfers of a given size.
The time-to-finish of the last flow measures the efficiency and
fairness of the scheme. It can easily be mapped into "Gbps" if that
is what people are familiar with. This test alone doesn't capture
effects like RTT unfairness, or slow convergence to fairness. In an
emulated environment, they could be factored in by passing all 32
flows through different RTTs, drawn from a "realistic" distribution,
and starting them at different times.
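[A sketch of how the GFS-style test might be mapped into a single "Gbps" figure from measured per-flow finish times. The helper below is a hypothetical illustration, assuming equal-sized transfers; it is not from the original thread:]

```python
def gfs_style_gbps(flow_size_gbits, finish_times_s):
    """Map the 32-parallel-transfer test to one throughput figure.
    The makespan (time-to-finish of the LAST flow) penalizes both
    inefficiency and unfairness: a starved flow finishes late and
    drags the whole score down."""
    makespan = max(finish_times_s)
    return len(finish_times_s) * flow_size_gbits / makespan

# 32 flows of 100 Gbit each; a single straggler halves the score:
fair_run = gfs_style_gbps(100.0, [40.0] * 32)             # 80.0 "Gbps"
unfair_run = gfs_style_gbps(100.0, [30.0] * 31 + [80.0])  # 40.0 "Gbps"
```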
For example, the result could be the maximum of the times from
(starting in increasing order of RTT) and (starting in decreasing
order of RTT). That would protect against unfairness from new flows
starting too aggressively or not aggressively enough. For this test,
it is probably appropriate to neglect factors that are important for
more general-purpose benchmarks (such as cross traffic and reverse
traffic).

$0.02
Lachlan Andrew

From: floyd at icir.org (Sally Floyd)
Date: Wed, 12 Jul 2006 13:22:32 -0700
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group
Message-ID: <200607122022.k6CKMWlt085907@cougar.icir.org>

The TMRG mailing list has been largely dormant for many months now, so
this is a report of the current status. The TMRG web page is at
"http://www.icir.org/tmrg/".

* The draft on "Metrics for the Evaluation of Congestion Control
  Mechanisms" is essentially complete. It needs a volunteer for a
  shepherd, to see if it is ready for publication.

* The draft on "Tools for the Evaluation of Simulation and Testbed
  Scenarios" needs contributions on the topics of "Distribution of
  packet sequence numbers", "Characterization of Congested Links in
  Terms of Bandwidth and Typical Levels of Congestion", and
  "Characterization of Network Changes Affecting Congestion",
  characterizing the current state of our knowledge (if any).

* Evaluating Congestion Control Mechanisms over Challenging Lower
  Layers (e.g., wireless). The Tools draft above has a place-holder
  for a section on "Characterization of Challenging Lower Layers."
  This would probably be best in a separate draft, along with best
  current practices for the evaluation of congestion control
  mechanisms over these challenging lower layers.

* Yong Xia and others are working on a draft giving best current
  practice for the evaluation of congestion control mechanisms in
  general scenarios, including single-bottleneck topologies,
  parking-lot topologies, and general topologies.
This will be sent to the mailing list when it is ready for feedback.

- Sally

From: andras.veres at ericsson.com (András Veres (IJ/ETH))
Date: Thu, 13 Jul 2006 11:21:48 +0200
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group
Message-ID:

Sally,

At Ericsson we are investigating whether there is any foreseeable
problem with current/future TCP in future high-speed wireless
networks. I have been reading the work on this list with great
interest, and it has helped me a lot in defining the right metrics and
tools for this work. I have just a few minor comments:

In "Metrics for the Evaluation of Congestion Control Mechanisms", the
"response to changes" section states that the congestion control
should be responsive to changes due to congestion and route changes.
I propose to extend this slightly. In the future (maybe the very near
future) we expect that most hosts will be connected to the Internet
via wireless/mobile networks, and mobility will be frequent. In this
case it will be very common to perform handoffs between very different
cells with different levels of congestion and very different capacity.
I suggest that the "response to changes" section should be extended
with something like this:

"Congestion control mechanisms should respond to sudden
bandwidth-delay product changes due to mobility in the future. Such
bandwidth-delay product changes are expected to be more frequent, and
expected to have greater impact, than path changes today. Due to
mobility, both the bandwidth and the round-trip delay may suddenly
change.
Due to the heterogeneity of wireless access types (802.11b/a/g, WiMAX,
WCDMA, HS-WCDMA, E-GPRS, Bluetooth, etc.), the congestion control
protocol has to be able to handle BDP changes (with reasonable
efficiency) of several orders of magnitude."

/Andras

-----Original Message-----
From: tmrg-interest-bounces at ICSI.Berkeley.EDU
[mailto:tmrg-interest-bounces at ICSI.Berkeley.EDU] On Behalf Of Sally Floyd
Sent: Wednesday, July 12, 2006 10:23 PM
To: tmrg-interest at ICSI.Berkeley.EDU
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group

The TMRG mailing list has been largely dormant for many months now, so
this is a report of the current status. [...]
- Sally

_______________________________________________
Tmrg-interest mailing list
Tmrg-interest at ICSI.Berkeley.EDU
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest

From: mascolo at poliba.it (Saverio Mascolo)
Date: Fri, 14 Jul 2006 16:45:08 +0200
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group
Message-ID: <000e01c6a754$94ab8d00$723bccc1@HPSM>

In section 3.2.2, "minimizing oscillations", I would add a metric such
as the covariance:

cov(x) = stdev(x)/E(x)

Saverio Mascolo

On 7/12/06, Sally Floyd wrote:

> The TMRG mailing list has been largely dormant for many months now,
> so this is a report of the current status. [...]
> - Sally

From: floyd at icir.org (Sally Floyd)
Date: Sun, 23 Jul 2006 13:07:23 -0700
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group
Message-ID: <200607232007.k6NK7NcT059758@cougar.icir.org>

Andras -

> I suggest that the "response to changes" section should be extended
> with something like this:
>
> "Congestion control mechanisms should respond to sudden
> bandwidth-delay product changes due to mobility in the future.
> [...] the congestion control protocol has to be able to handle BDP
> changes (with reasonable efficiency) of several orders of magnitude"

Good idea. And many thanks for the suggested text.
I revised it slightly, and added the following:

    Congestion control mechanisms also have to contend with sudden
    changes in the bandwidth-delay product due to mobility. Such
    bandwidth-delay product changes are expected to become more
    frequent, and to have greater impact, than path changes today.
    As a result of both mobility and the heterogeneity of wireless
    access types (802.11b/a/g, WiMAX, WCDMA, HS-WCDMA, E-GPRS,
    Bluetooth, etc.), both the bandwidth and the round-trip delay can
    change suddenly, sometimes by several orders of magnitude.

- Sally

From: floyd at icir.org (Sally Floyd)
Date: Sun, 23 Jul 2006 13:30:32 -0700
Subject: [Tmrg-interest] a report on the Transport Modeling Research Group
Message-ID: <200607232030.k6NKUW6v059812@cougar.icir.org>

Saverio -

> In section 3.2.2, "minimizing oscillations", I would add a metric
> such as the covariance:
>
> cov(x) = stdev(x)/E(x)

Many thanks. Will do.

- Sally

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Wed, 23 Aug 2006 18:07:50 -0700
Subject: [Tmrg-interest] PFLDnet2007 -- Call for papers
Message-ID:

Fifth International Workshop on Protocols for Fast, Long Distance
networks, 2007 (PFLDnet2007)

7-8 February 2007, ISI, Marina Del Rey (Los Angeles), California
http://wil.cs.caltech.edu/pfldnet2007/cfp.php

PAPER SUBMISSION DEADLINE: 13 OCTOBER, 2006

Future networks will increasingly contain high bandwidth, high delay
paths. Protocols for fast, long distance networks will have to expand
from experimental testbeds into the wider world. In that process, it
is increasingly important for protocols which achieve high utilization
on fast, long distance networks also to have the qualities required
for widespread use, such as fairness and scalability to many users.
PFLDnet2007 solicits papers which will further research on protocols
for tomorrow's high speed internet, at all stages along the continuum
from the requirements of today's grid networks, through benchmarking
of proposals, to the final selection of the next generation of TCP.

Participants wishing to present a paper should upload a full paper to
the submission site by Friday 13 October, 2006. Papers should
typically be 4-5 pages, and no longer than 6 pages.

Scope (choose at least one of the subject areas below; multiple
selections possible):

* Protocol issues in fast long-distance networks
* Protocol development
  o Enhancements of TCP and its variants
  o Novel data transport protocols designed for new application services
  o Explicit signaling protocols: optimization criteria and deployment strategies
  o Pacing and shaping of TCP and UDP traffic
  o Parallel transfers and multistreaming
* Performance evaluation
  o Modeling and simulation-based results
  o Experiments on real networks and actual measurements
  o Protocol benchmarking
* Hardware-specific issues
  o Transport over optical networks
  o RDMA over WANs
  o Protocol implementation and hardware issues (PCs, NICs, TOEs, routers, switches, etc.)
  o Data replication and striping
* Application focus
  o Requirements and experience from bandwidth-demanding applications
  o Bulk-data transfer applications, both TCP and non-TCP based
  o Transport service for Grids
  o QoS and scalability issues
  o Multicast over fast long-distance networks

Authors whose papers are selected for presentation will have the
option to submit a revised paper, to be published on the PFLDnet 2007
web site and in the PFLDnet 2007 proceedings.
Conference Co-Chairs:
  Lachlan Andrew
  Aaron Falk
  Medy Sanadidi

TPC Co-Chairs:
  Lachlan Andrew
  Doug Leith
  Medy Sanadidi

Important Dates

***Initial paper submission deadline: Friday 13 October, 2006***
Acceptance notification: Monday 4 December, 2006
Final paper submission: Wednesday 24 January, 2007
Workshop: Wednesday and Thursday, 7 and 8 February, 2007

-- Lachlan Andrew, Doug Leith and Medy Sanadidi, TPC co-chairs

From: jasani_rohan at bah.com (Jasani Rohan)
Date: Thu, 31 Aug 2006 14:04:58 -0400
Subject: [Tmrg-interest] Update to "Tools for the Eval..." I-D
Message-ID:

All:

I'm working with a team of colleagues at Booz Allen Hamilton to write
two sections of the "Tools for the Evaluation of Simulation and
Testbed Scenarios" Internet-Draft:

- Section 16: Characterization of Challenging Lower Layers
- Section 17: Characterization of Network Changes Affecting Congestion

We have already been in touch with Sally Floyd regarding the direction
and structure of these sections. We will release these sections to the
list once they are ready for review by the community.

If you have any questions, suggestions or comments regarding these
sections, please do let me know.

Thanks,
Rohan Jasani
Booz | Allen | Hamilton
703.984.0337 (office)
832.452.3241 (mobile)

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Sun, 8 Oct 2006 18:01:18 -0700
Subject: [Tmrg-interest] CFP - PFLDnet2007 -- Extended deadline
Message-ID:

Fifth International Workshop on Protocols for Fast, Long Distance
networks, 2007 (PFLDnet2007)

7-8 February 2007, ISI, Marina Del Rey (Los Angeles), California
http://wil.cs.caltech.edu/pfldnet2007/cfp.php

PAPER SUBMISSION DEADLINE: EXTENDED to 20 OCTOBER, 2006

Future networks will increasingly contain high bandwidth, high delay
paths.
[The body of this CFP repeated the earlier Call for Papers, with the
submission deadline extended to Friday 20 October, 2006.]

Important Dates

***Initial paper submission deadline: EXTENDED to Friday 20 October, 2006***
Acceptance notification: Monday 4 December, 2006
Final paper submission: Wednesday 24 January, 2007
Workshop: Wednesday and Thursday, 7 and 8 February, 2007

-- Lachlan Andrew, Aaron Falk, Doug Leith and Medy Sanadidi,
co-chairs / TPC co-chairs

From: sallyfloyd at mac.com (Sally Floyd)
Date: Wed, 8 Nov 2006 13:56:06 -0800
Subject: [Tmrg-interest] Metrics for the Evaluation of Congestion Control Mechanisms
Message-ID: <9731bcd1eda9a0c22bbe6a63df7e33f2@mac.com>

(Reviving the long-dormant TMRG mailing list...)

The TMRG internet draft on "Metrics for the Evaluation of Congestion
Control Mechanisms",
"http://www.ietf.org/internet-drafts/draft-irtf-tmrg-metrics-04.txt",
has been fairly stable for quite some time, and I think it is ready to
be advanced to an Informational RFC. Here is the process
("http://www.isi.edu/~falk/papers/draft-irtf-rfcs-00.txt"):

A shepherd (me, in this case) asks for feedback from the research
group, to determine whether the group thinks that the document is
ready to move to an Informational RFC, and to determine the state of
consensus of the research group.
The state of consensus can range from "this document represents the
consensus of the FOOBAR RG" to "the views in this document were
considered controversial by the FOOBAR RG, but the RG reached a
consensus that the document should still be published".

Once the research group has determined that the document is ready to
be advanced to Informational, the draft is forwarded to the IRSG
(Internet Research Steering Group) for another stage of broader
review.

So what this document needs now is a few members of the TMRG to agree
to review it in its current revision (from August 2006), and give
feedback to the group. The current revision includes feedback from
the following: Armando Caro, Dah Ming Chiu, Dado Colussi, Wesley Eddy,
Nelson Fonseca, Janardhan Iyengar, Doug Leith, Saverio Mascolo, Sean
Moore, Injong Rhee, Andras Veres, and Damon Wischik.

Volunteers? Many thanks.

- Sally
http://www.icir.org/tmrg/

From: sallyfloyd at mac.com (Sally Floyd)
Date: Sat, 16 Dec 2006 15:12:18 -0800
Subject: [Tmrg-interest] Tools for the Evaluation of Simulation and Testbed Scenarios
Message-ID: <8113ce2e0e29f1d1b50ac8987cad3f4c@mac.com>

A revised version of this draft has been posted, available from:

http://www.ietf.org/internet-drafts/draft-irtf-tmrg-tools-03.txt
http://www.ietf.org/internet-drafts/draft-irtf-tmrg-tools-03.ps

This draft has new sections on Challenging Lower Layers and on Network
Changes Affecting Congestion, contributed by Rohan Jasani and others.
Any feedback would be welcome.
- Sally
http://www.icir.org/tmrg/

From: sallyfloyd at mac.com (Sally Floyd)
Date: Tue, 20 Feb 2007 17:23:09 -0800
Subject: [Tmrg-interest] Metrics for the Evaluation of Congestion Control Mechanisms
Message-ID: <21656266c60d6fa2e651cb4c34417230@mac.com>

"Metrics for the Evaluation of Congestion Control Mechanisms",
internet-draft draft-irtf-tmrg-metrics-07.txt, is available from:

http://www.icir.org/tmrg/draft-irtf-tmrg-metrics-07.txt
http://www.icir.org/tmrg/draft-irtf-tmrg-metrics-07.ps

This draft has finished review within the TMRG (Transport Modeling
Research Group), and is ready to be passed to the IRTF for review and
publication. However, before I pass it to the IRTF, I have just passed
it by the ICCRG for feedback. If anyone would like to read it and give
feedback, feedback would be appreciated in the next two weeks (that
is, by March 6). After that, I will check the TMRG mailing list one
final time to determine the level of consensus, and then will forward
the draft to the IRTF for review and publication as an Informational
RFC.

Many thanks,
- Sally
http://www.icir.org/floyd/

History within TMRG:

* The first version of the draft was submitted in May 2005.
* The draft has had contributions or reviews from the following:
  Armando Caro, Dah Ming Chiu, Dado Colussi, Wesley Eddy, Nelson
  Fonseca, Janardhan Iyengar, Doug Leith, Saverio Mascolo, Sean Moore,
  Injong Rhee, David Ros, Andras Veres, and Damon Wischik.
* The procedure for advancing to Informational was outlined in email
  on November 8 to the TMRG mailing list, and the draft received a
  review from David Ros.
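[The oscillation metric suggested earlier in the thread, cov(x) = stdev(x)/E(x) (i.e. the coefficient of variation), is simple to compute from a throughput or queue-length trace. A minimal sketch, as an editorial illustration rather than anything from the draft itself:]

```python
import statistics

def cov(samples):
    """cov(x) = stdev(x)/E(x): the population standard deviation
    normalized by the mean, so a perfectly smooth sending rate
    scores 0 and larger values mean larger oscillations."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

smooth_rate = cov([10.0, 10.0, 10.0, 10.0])  # no oscillation -> 0.0
sawtooth_rate = cov([5.0, 10.0, 5.0, 10.0])  # oscillating sender
```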
From: lars.eggert at nokia.com (Lars Eggert)
Date: Thu, 22 Feb 2007 19:34:02 +0200
Subject: [Tmrg-interest] TSVAREA presentation slots for Prague
Message-ID:

Hi,

please send agenda requests for TSVAREA in Prague to
tsv-ads at tools.ietf.org, including:

  title/topic
  presenter
  requested time

The purpose of the TSVAREA meeting is to inform about and discuss
important issues, developments and work within the transport area, or
outside work that impacts the transport area. In contrast to TSVWG,
TSVAREA does not produce any documents. TSVAREA can include
tutorial-style talks on transport topics, maybe based on related IRTF
or other research work. Again, we encourage relevant presentations
from both within the transport area and from outside parties, such as
other IETF WGs or the IRTF.

Thanks,
Lars

From: lars.eggert at nokia.com (ext Lars Eggert)
Date: Tue, 6 Mar 2007 17:38:55 +0200
Subject: [Tmrg-interest] preliminary TSVAREA agenda for Prague
Message-ID:

TSVAREA Agenda for IETF-68 (Prague)
TUESDAY, March 20, 2007, 1520-1720, Afternoon Session II, Congress III

10 min  Note Well, Scribes, Agenda Bashing
10 min  New WG Overview: Congestion and Pre-Congestion Notification (PCN)
        Scott Bradner and Steven Blake
20 min  Bringing Experimental High-Speed Congestion Control to the IETF
        Lars Eggert
20 min  Vista Implementation Report on ECN/FRTO/WS/DSACK
        Dave Thaler and Murari Sridharan

Please email additional agenda requests to tsv-ads at tools.ietf.org.

The purpose of the TSVAREA meeting is to inform about and discuss
important issues, developments and work within the transport area, or
outside work that impacts the transport area.
In contrast to TSVWG, TSVAREA does not produce any documents. TSVAREA can include tutorial-style talks on transport topics, perhaps based on related IRTF or other research work. Again, we encourage relevant presentations from both within the transport area and from outside parties, such as other IETF WGs or the IRTF.

From: sallyfloyd at mac.com (Sally Floyd)
Date: Mon, 12 Mar 2007 11:38:57 -0700
Subject: [Tmrg-interest] feedback on draft-irtf-tmrg-metrics?

This is a last check of the research group to see that there is rough consensus that "Metrics for the Evaluation of Congestion Control Mechanisms", internet-draft draft-irtf-tmrg-metrics-06.txt, is ready to forward to the IRTF for review and publication as an Informational RFC.

This draft finished review in TMRG several months ago, and it just finished a pass of feedback from ICCRG (the Internet Congestion Control Research Group). The feedback included input from Michael Welzl and Lachlan Andrew adding explanations to the fairness discussions, and some general feedback from Mark Allman.

I would like to forward this to the IRTF next week (March 19). This is a final check of the TMRG that there is rough consensus for this document to be forwarded. The abstract of the document contains the following caveat:

This document is a product of the Transport Modeling Research Group (TMRG), and has received detailed feedback from many members of the Research Group (RG).
As the document tries to make clear, there is not necessarily a consensus within the research community (or the IETF community, the vendor community, the operations community, or any other community) about the metrics that congestion control mechanisms should be designed to optimize, in terms of tradeoffs between throughput and delay, fairness between competing flows, and the like. However, we believe that there is a clear consensus that congestion control mechanisms should be evaluated in terms of tradeoffs between a range of metrics, rather than in terms of optimizing for a single metric. Thanks - - Sally http://www.icir.org/floyd/ History within TMRG: * The first version of the draft was submitted in May 2005. * The draft has had contributions or reviews from the following: Armando Caro, Dah Ming Chiu, Dado Colussi, Wesley Eddy, Nelson Fonseca, Janardhan Iyengar, Doug Leith, Saverio Mascolo, Sean Moore, Injong Rhee, David Ros, Andras Veres, and Damon Wischik, * The procedure for advancing to Informational was outlined in email on November 8 to the TMRG mailing list, and the draft received a review from David Ros. * Finished a round of feedback from ICCRG on March 6, 2007. Feedback from Mark Allman, Lachlan Andrew, and Michael Welzl. From: michael.welzl at uibk.ac.at (Michael Welzl) Date: Thu, 15 Mar 2007 14:33:42 +0100 Subject: [Tmrg-interest] feedback on draft-irtf-tmrg-metrics? In-Reply-To: References: Message-ID: <1173965622.3332.32.camel@pc105-c703.uibk.ac.at> Dear Sally, dear TMRG'ers, I think that this email should refer to draft-irtf-tmrg-metrics-09.txt ( http://www.icir.org/tmrg/draft-irtf-tmrg-metrics-09.txt ) and not -06 BTW, the reference to RFC 3168 seems to be broken ("BIBREF ..." 
) Cheers, Michael On Mon, 2007-03-12 at 11:38 -0700, Sally Floyd wrote: > This is a last check of the research group to see that there is > rough consensus that "Metrics for the Evaluation of Congestion > Control Mechanisms", internet-draft draft-irtf-tmrg-metrics-06.txt, > is ready for forward to the IRTF for review and publication as an > Informational RFC. > > This draft finished review in TMRG several months ago, and it just > finished a pass of feedback from ICCRG (the Internet Congestion > Control Research Group). The feedback included feedback from > Michael Welzl and Lachlan Andrew adding explanations to the fairness > discussions, and some general feedback from Mark Allman. > > I would like to forward this to the IRTF next week (March 19). This > is a final check of the TMRG that there is rough consensus for this > document to be forwarded. The abstract of the document contains > the following caveat: > > This document is a product of the Transport Modeling Research Group > (TRMG), and has received detailed feedback from many members of the > Research Group (RG). As the document tries to make clear, there is > not necessarily a consensus within the research community (or the > IETF community, the vendor community, the operations community, or > any other community) about the metrics that congestion control > mechanisms should be designed to optimize, in terms of tradeoffs > between throughput and delay, fairness between competing flows, and > the like. However, we believe that there is a clear consensus that > congestion control mechanisms should be evaluated in terms of > tradeoffs between a range of metrics, rather than in terms of > optimizing for a single metric. > > Thanks - > > - Sally > http://www.icir.org/floyd/ > > > History within TMRG: > > * The first version of the draft was submitted in May 2005. 
> > * The draft has had contributions or reviews from the following: > Armando Caro, Dah Ming Chiu, Dado Colussi, Wesley Eddy, > Nelson Fonseca, Janardhan Iyengar, Doug Leith, Saverio Mascolo, Sean > Moore, Injong Rhee, David Ros, Andras Veres, and Damon Wischik, > > * The procedure for advancing to Informational was outlined in > email on November 8 to the TMRG mailing list, and the > draft received a review from David Ros. > > * Finished a round of feedback from ICCRG on March 6, 2007. > Feedback from Mark Allman, Lachlan Andrew, and Michael Welzl. > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 15 Mar 2007 09:22:09 -0700 Subject: [Tmrg-interest] feedback on draft-irtf-tmrg-metrics? In-Reply-To: <1173965622.3332.32.camel@pc105-c703.uibk.ac.at> References: <1173965622.3332.32.camel@pc105-c703.uibk.ac.at> Message-ID: <20734715cad4797af239cebceb42b1f4@mac.com> > I think that this email should refer to > draft-irtf-tmrg-metrics-09.txt > ( http://www.icir.org/tmrg/draft-irtf-tmrg-metrics-09.txt ) > and not -06 Oops. Sorry, you are exactly right. > BTW, the reference to RFC 3168 seems to be broken ("BIBREF ..." ) Many thanks. I will fix it. (I left out a period...) - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 20 Mar 2007 11:50:20 -0700 Subject: [Tmrg-interest] forwarding draft-irtf-tmrg-metrics-09.txt to the IRTF for review Message-ID: <0462c73435f2a252b5335aea2a3a059e@mac.com> Aaron - This is to forward the draft draft-irtf-tmrg-metrics-09.txt to the IRTF, to be considered for Informational. The draft is available at "http://www.icir.org/tmrg/draft-irtf-tmrg-metrics-09.txt", and will be submitted to the internet-drafts editor at the end of this IETF. The report on the document is appended below. If this needs anything more, let me know. 
Many thanks,
- Sally http://www.icir.org/floyd/

---------------------------------------------------------
Document shepherd: Sally Floyd

Has the document had adequate review? Yes, the document has had strong reviews, from both TMRG and ICCRG members and others. The draft has had contributions or reviews from the following: Armando Caro, Dah Ming Chiu, Dado Colussi, Wesley Eddy, Nelson Fonseca, Janardhan Iyengar, Doug Leith, Saverio Mascolo, Sean Moore, Injong Rhee, David Ros, Andras Veres, and Damon Wischik. Feedback from the ICCRG has come from Mark Allman, Lachlan Andrew, and Michael Welzl.

Does the Document Shepherd have concerns that the document needs more review from a particular or broader perspective? Nope. It is a pretty low-key document.

Does the Document Shepherd have any specific concerns or issues with this document that the IRTF should be aware of? There isn't any particular consensus in the IETF or research communities about the metrics that congestion control should be designed to optimize. The document makes this explicit. From the abstract:

This document is a product of the Transport Modeling Research Group (TMRG), and has received detailed feedback from many members of the Research Group (RG). As the document tries to make clear, there is not necessarily a consensus within the research community (or the IETF community, the vendor community, the operations community, or any other community) about the metrics that congestion control mechanisms should be designed to optimize, in terms of tradeoffs between throughput and delay, fairness between competing flows, and the like. However, we believe that there is a clear consensus that congestion control mechanisms should be evaluated in terms of tradeoffs between a range of metrics, rather than in terms of optimizing for a single metric.

How solid is the RG consensus behind this document?
It is a very low-activity RG, but none of the feedback, which is from a wide range of people, has expressed unresolved problems with the document. The paragraph above from the abstract tries to make clear the nature of the consensus.

Has the Document Shepherd personally verified that the document satisfies all ID nits? Yep.

---------------------------------------------------------

From: wanggang at research.nec.com.cn (Wang gang)
Date: Wed, 18 Apr 2007 15:16:30 +0800
Subject: [Tmrg-interest] Internet draft submission: draft-irtf-tmrg-ns2-tool-00.txt
Message-ID: <00d501c78189$7cd1dee0$c44c1cac@ad.research.nec.com.cn>

Dear colleagues,

We have submitted a TMRG draft, draft-irtf-tmrg-ns2-tool-00.txt, in the attached file, which introduces the rationale and details that have been considered and implemented in an NS2 TCP evaluation tool suite. The tool suite includes a set of TCL scripts targeted at saving researchers' effort in NS2 simulations, and will be released a little later. We would like to have your comments and advice in order to make the tool more useful for researchers in this area. Many thanks for your kind review.

Best Regards.

Gang Wang.
----------------------------------------
Gang Wang
NEC Labs, China
010-62705962/63 (ext.511)
wanggang at research.nec.com.cn

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: draft-irtf-tmrg-ns2-tool-00.txt
Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070418/00e733c5/attachment-0001.txt

From: wanggang at research.nec.com.cn (Wang gang)
Date: Thu, 10 May 2007 13:34:15 +0800
Subject: [Tmrg-interest] draft-irtf-tmrg-ns2-tool-00.txt
References: <88d780b40704271759w3dff6300qb727abc7192991af@mail.gmail.com> <001001c78932$a335a340$c44c1cac@ad.research.nec.com.cn> <88d780b40704271820i62445fe2w4ff5070c27922eac@mail.gmail.com>
Message-ID: <007a01c792c4$d92bcc40$c44c1cac@ad.research.nec.com.cn>

Dear all,

The webpage for 'An NS2 TCP Evaluation Tool' is ready: http://labs.nec.com.cn/tcpeval.htm

We are building the project on SourceForge.net right now, and looking forward to collaborating with you.

Best Regards,
Gang Wang and Yong Xia.

From: both at bothom.de (Thomas Michael Bohnert)
Date: Thu, 14 Jun 2007 00:50:22 +0100
Subject: [Tmrg-interest] CFP: 2nd IEEE WORKSHOP ON BROADBAND WIRELESS ACCESS (BWA)
Message-ID: <200706140050.22558.both@bothom.de>

"Please accept our apologies if you receive multiple copies."

************************************************************************
2nd IEEE WORKSHOP ON BROADBAND WIRELESS ACCESS (BWA)
(website will be online soon)
colocated with 5th IEEE Consumer Communications & Networking Conference
IEEE CCNC 2008, January 2008, Las Vegas, Nevada
http://www.ieee-ccnc.org/
************************************************************************

CALL FOR PAPERS

Internet access is undergoing a fundamental change. A steadily increasing spectrum of services is attracting a rapidly growing number of users who, in turn, wish to access these services 'anytime and anywhere'. In order to meet this demand, Broadband Wireless Access (BWA) technologies are becoming extremely important, and vendors and standardisation bodies are responding to this development with new and powerful BWA technologies.
Supporting transmission rates of up to several megabits per second at distances as far as tens of kilometres while providing full mobility support, these technologies provide the long-awaited means for delivering any telecommunication service over the Internet. BWA technologies are still in their infancy, and one outcome is that many are far from being complete and optimised for such a versatile environment as the Internet. Consequently, BWA is currently receiving much attention from the research community. By organising this 2nd IEEE BWA Workshop, it is our intention to bring together this research community and provide an international forum for it. In this line, the workshop programme covers various aspects of these technologies, including but not limited to:

- Wireless Metropolitan Area Networks
- Incumbent and future BWA Technologies, 802.16x, 802.20, 802.11x, 3G/4G etc
- QoS in Mobile and BWA Networks
- Radio Resource Management, Admission Control and Scheduling
- Capacity Planning and Traffic Engineering
- Physical and Data link Layer Issues
- Characterization, Modeling of BWA Traffic, Mobility and Channels
- Large-scale and Heterogeneous BWA Evaluations
- Spectrum Management
- Interoperability Aspects (fixed/mobile LANs/MANs, WANs)
- Vertical and Horizontal Integration
- Micro and Macromobility Management
- Wireless Applications Support
- Cross-layer Optimisation Approaches
- Experimental Evaluation of Cross-layer Interactions
- Wireless Network Management
- (W)NGN Architectures and Trends
- Design and Evaluation of Testbeds
- Experiences/lessons from recent deployments

PAPER SUBMISSION

Submitted papers must represent original material that is not currently under review in any other conference or journal and has not been previously published. Paper length should not exceed five pages. Please see the Author Information page on the CCNC website for submission guidelines.
The paper should be used as the basis for a 20-30 minute workshop presentation. Papers should be submitted in .pdf or .ps format by selecting CCNC'08 at the EDAS paper submission website and then selecting the workshop submission link. A separate cover sheet should show the title of the paper, the author(s) name(s) and affiliation(s), and the address (including e-mail, telephone, and fax) to which correspondence should be sent. All submitted papers will be reviewed by up to three experts and, if accepted, published in the conference proceedings, which will be available on IEEE Xplore and registered in Engineering Index (EI). At least one author of accepted papers is required to register at the full registration rate.

IMPORTANT DATES
Paper Submission: 29 July 2007
Author Notification: 14 September 2007
Camera-ready Copy: 5 October 2007
Author Registration Deadline: 10 October 2007
Workshop date: 12 January 2008

GENERAL CHAIRS
Thomas Michael Bohnert, University of Coimbra, PT
Dmitri Moltchanov, Tampere University of Technology, FI

TPC CHAIR
Dirk Staehle, University of Wuerzburg, DE

PUBLICITY CHAIRS
Yevgeni Koucheryavy, Tampere University of Technology, FI
Edmundo Monteiro, University of Coimbra, PT

TECHNICAL PROGRAM COMMITTEE
Alexandre Fonte, Polytechnical Institute of Castelo Branco, PT
Cedric Westphal, Nokia Siemens Research Center, US
Eckhart Koerner, University of Applied Sciences Mannheim, DE
Eugen Borcoci, University Politehnica of Bucharest, RO
Francis Lee Bu Sung, Nanyang Technological University, SG
Francisco Barcelo-Arroyo, Universitat Politecnica de Catalunya, ES
Gabor Fodor, Ericsson Research, SE
Geng-Sheng Kuo, Beijing University of Post and Telecommunication, PRC
Georgios Paschos, University of Patras, GR
Geert Heijenk, University of Twente, NL
Giovanni Giambene, University of Siena, IT
Jorge Sá
Silva, University of Coimbra, PT
Jorma Kilpi, VTT Research Centre, FI
Madjid Merabti, Liverpool John Moores University, UK
Nelson da Fonseca, Universidade Estadual de Campinas, BR
Nicola Ciulli, Consorzio Pisa Ricerche, IT
Paulo Simoes, University of Coimbra, PT
Saverio Mascolo, Politecnico di Bari, IT
Sean Murphy, University College Dublin, IE
Torsten Braun, University of Bern, CH
Vasilios Siris, University of Crete, GR
Vasos Vassiliou, University of Cyprus, GR
Xavier Pérez Costa, NEC Network Research Labs, DE

From: lars.eggert at nokia.com (Lars Eggert)
Date: Thu, 14 Jun 2007 15:05:17 +0300
Subject: [Tmrg-interest] TSVAREA meeting in Chicago
Message-ID: <68A4347D-841E-482A-930B-A957EEE9790B@nokia.com>

On 2007-5-28, at 15:59, ext Lars Eggert wrote:
> We are planning to have our usual open Transport Area meeting in
> Chicago. Please send agenda requests to the ADs.

We have not received any requests for agenda time. If you would like a slot, please reply by June 26 at the latest. (In the absence of agenda requests, we'll not meet.)

The purpose of the TSVAREA meeting is to inform about and discuss important issues, developments and work within the transport area or outside work that impacts the transport area. In contrast to TSVWG, TSVAREA does not produce any documents. TSVAREA can include tutorial-style talks on transport topics, perhaps based on related IRTF or other research work. Again, we encourage relevant presentations from both within the transport area and from outside parties, such as other IETF WGs or the IRTF.

Lars

PS: Reply-to set to tsv-area at ietf.org.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2446 bytes
Desc: not available
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070614/3e4b3688/attachment.bin

From: zimmermann at i4.informatik.rwth-aachen.de (Alexander Zimmermann)
Date: Tue, 19 Jun 2007 13:45:41 +0200
Subject: [Tmrg-interest] draft-irtf-tmrg-metrics-09
Message-ID: <6693B5D1-B161-4492-980A-310F9BF9889F@i4.informatik.rwth-aachen.de>

Hi all,

some minor corrections in the reference list:
pages 8, 20: [FF98] => [FF99]
page 20: [HKLRZX06]: where is Ms./Mr. "Z"?

Regards
Alex

//
// Dipl.-Inform. Alexander Zimmermann
// Department of Computer Science 4, RWTH Aachen University
// Ahornstr. 55, 52074 Aachen, Germany
// phone: (49-241) 80-21422, fax: (49-241) 80-22220
// email: zimmermann at cs.rwth-aachen.de
// web: http://www.nets.rwth-aachen.de/mcg/
//

From: sallyfloyd at mac.com (Sally Floyd)
Date: Thu, 21 Jun 2007 15:37:08 -0700
Subject: [Tmrg-interest] draft-irtf-tmrg-metrics-09
In-Reply-To: <6693B5D1-B161-4492-980A-310F9BF9889F@i4.informatik.rwth-aachen.de>
References: <6693B5D1-B161-4492-980A-310F9BF9889F@i4.informatik.rwth-aachen.de>
Message-ID: <2141299c04d2f2f65ce6cb04c052cc26@mac.com>

> some minor corrections in the reference list:

Thanks, I will fix them.
- Sally http://www.icir.org/floyd/ From: falk at ISI.EDU (Aaron Falk) Date: Fri, 22 Jun 2007 16:11:04 -0700 Subject: [Tmrg-interest] [IRSG] IRSG Review: draft-irtf-tmrg-metrics-09 In-Reply-To: References: Message-ID: <91A9FE4D-E418-4E10-9A57-B9E806C21379@isi.edu> On Jun 22, 2007, at 4:00 PM, Tony Li wrote: > [Issue 2] It should be noted that as of the > time of this review, several of the references are now outdated. > These can easily be found through the idnits tool. These should be > updated before publication. FYI, the RFC Editor will update references to Internet Drafts to the most recent revision. --aaron From: sallyfloyd at mac.com (Sally Floyd) Date: Fri, 22 Jun 2007 22:24:33 -0700 Subject: [Tmrg-interest] [IRSG] IRSG Review: draft-irtf-tmrg-metrics-09 In-Reply-To: References: Message-ID: <72efd43c2ac0b04b25fa58e51a4d5ee8@mac.com> Tony - Many thanks for the review for the IRSG. > This is a review of draft-irtf-tmrg-metrics-09, in accordance with > draft-irtf-rfcs-01, section 5.2.2. This review raises two issues > (see [Issue x] below) that should be resolved prior to proceeding > with publication. ... > * There must be a paragraph near the beginning (for example, in > the introduction) describing the level of support for publication. > Example text might read: "this document represents the consensus of > the FOOBAR RG" or "the views in this document were considered > controversial by the FOOBAR RG but the RG reached a consensus that > the document should still be published". > > [Issue 1] Present in the abstract. This text should be replicated > into the body of the document. Replacing the last paragraph of the > introduction with a copy of the last paragraph from the abstract > should suffice. Thanks, I will do that. > * There should be citations and references to relevant research > publications. > > The references fill 4.5 pages and are frequently cited throughout > the text. 
> Not being a subject matter expert, I am not prepared to judge their
> relevancy. [Issue 2] It should be noted that as of the time of this
> review, several of the references are now outdated. These can easily
> be found through the idnits tool. These should be updated before
> publication.

I will make sure that the references are updated.

- Sally http://www.icir.org/floyd/

From: tli at cisco.com (Tony Li)
Date: Fri, 22 Jun 2007 16:00:58 -0700
Subject: [Tmrg-interest] IRSG Review: draft-irtf-tmrg-metrics-09

Hi all,

This is a review of draft-irtf-tmrg-metrics-09, in accordance with draft-irtf-rfcs-01, section 5.2.2. This review raises two issues (see [Issue x] below) that should be resolved prior to proceeding with publication.

This document is very well written. I found the text to be clear, concise, direct and very comprehensible. Where the text gets specific, there is ample reference to other detailed explanations. Any researcher entering this field for the first time would find this document very accessible and an excellent introduction to the area. The document has had ample technical review in the research group.

Previous editions of this document and their publication dates:
00 August 2005
01 October 2005
02 June 2006
03 June 2006
04 August 2006
05 November 2006
06 December 2006
07 February 2007
08 March 2007
09 March 2007

There is a change log included in the document that is two full pages and includes the names of the many contributors. The acknowledgments section also highlights the breadth of contribution and review that the document has received, with 17 individuals listed.

Section 5.1 requirements:

* There must be a statement in the abstract identifying it as the product of the RG. Present.

* There must be a paragraph near the beginning (for example, in the introduction) describing the level of support for publication.
Example text might read: "this document represents the consensus of the FOOBAR RG" or "the views in this document were considered controversial by the FOOBAR RG but the RG reached a consensus that the document should still be published". [Issue 1] Present in the abstract. This text should be replicated into the body of the document. Replacing the last paragraph of the introduction with a copy of the last paragraph from the abstract should suffice. * The breadth of review the document has received must also be noted. For example, was this document read by all the active contributors, only three people, or folks who are not "in" the RG but are expert in the area? It is clear from the number of contributors that the document was widely read. * It must also be very clear throughout the document that it is not an IETF product and is not a standard. This is as clear as can be expressed within the context of an Internet draft. It should be noted that Internet drafts necessarily have a substantial amount of IETF boilerplate. * If an experimental protocol is described, appropriate usage caveats must be present. No protocol is described. * If the protocol has been considered in an IETF working group in the past, this must be noted in the introduction as well. No protocol is described. * There should be citations and references to relevant research publications. The references fill 4.5 pages and are frequently cited throughout the text. Not being a subject matter expert, I am not prepared to judge their relevancy. [Issue 2] It should be noted that as of the time of this review, several of the references are now outdated. These can easily be found through the idnits tool. These should be updated before publication. 
Tony Li
co-chair, Routing Research Group

From: wanggang at research.nec.com.cn (Wang gang)
Date: Wed, 8 Aug 2007 10:21:40 +0800
Subject: [Tmrg] A reminder about An NS2 TCP Evaluation Tool Suite
Message-ID: <015901c7d962$dacd2b90$c44c1cac@ad.research.nec.com.cn>

Dear colleagues,

We released the tool 'An NS2 TCP Evaluation Tool Suite' some time ago. Since then, we have received some feedback from users. We hope to receive wider comments, and seek collaborations or contributions to make the tool more useful. The download page is: http://labs.nec.com.cn/tcpeval.htm

Here is a brief introduction. This tool is motivated by the observation that there is significant overlap among (but a lack of an agreed set of) the topologies, traffic, and metrics used by many researchers in the evaluation of TCP alternatives: effort could be saved by starting research from an existing framework. As such, our tool includes several typical topologies and traffic models; it measures some of the most important metrics commonly used in TCP evaluation; and it can automatically generate simulation statistics and graphs ready for inclusion in LaTeX and HTML documents. The tool also contains an extensible open-source framework. With community effort, we hope the tool evolves into a widely accepted, well-defined set of TCP performance evaluation benchmarks.

Best Regards.

Gang Wang.
----------------------------------------
Gang Wang
NEC Labs, China
010-62705962/63 (ext.511)
wanggang at research.nec.com.cn

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070808/ec1f89e5/attachment.html

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Wed, 15 Aug 2007 19:44:39 -0700
Subject: [Tmrg] A reminder about An NS2 TCP Evaluation Tool Suite
In-Reply-To: <015901c7d962$dacd2b90$c44c1cac@ad.research.nec.com.cn>
References: <015901c7d962$dacd2b90$c44c1cac@ad.research.nec.com.cn>

Greetings Gang Wang,

Your tool looks nice. Here are some suggestions:

1. Each topology seems to specify a single bottleneck capacity and RTT. Is there a way to make it test a range of capacities, like 10, 100, 155 and 622 Mbps?

2. It specifies a diff_RTT variable so that flows can have different RTTs, but it seems that RTTs are equally spaced within the allowed range. This equal spacing may cause artefacts. More importantly, real RTTs aren't uniformly distributed. It would be good to have a more realistic distribution of RTTs. (If they're generated randomly, it will be important to make it repeatable still.)

3. Jain's measure of fairness does not reflect the user's experience.
- A fairness measure should give more weight to flows receiving very low throughput. If 10 flows get equal throughput, and one flow gets nothing, that is very unfair, but scores highly in Jain's index.
- This can partly be overcome by applying Jain's index to the *download times* instead of the rates. As an approximation of the download time, you could use the reciprocal of the rate.
- Jain's measure also doesn't consider the impact of multiple bottlenecks. In a parking-lot topology with links of unequal capacity, the "fairest" solution IMO is for the flow which only uses the high-capacity link not to be restricted by the fact that there is another low-capacity link which it doesn't use. Jain's index only gives a high score if the high-capacity link is under-utilized.

4. The parking lot topology is very symmetric.
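The point about applying Jain's index to download times rather than rates can be checked with a small numerical sketch (illustrative only, not part of the tool: the synthetic rates and the reciprocal-of-rate proxy for download time are assumptions, and Python is used rather than the tool's TCL):

```python
# Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1].
def jain(xs):
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

rates = [10.0] * 10 + [0.01]      # ten healthy flows, one nearly starved
times = [1.0 / r for r in rates]  # download time approximated as 1/rate

print(jain(rates))   # ~0.91: looks "fair" despite the starved flow
print(jain(times))   # ~0.09: the starved flow's huge download time exposes the unfairness
```

The same data scores high on rates but very low on download times, which is exactly the asymmetry being argued: the index over rates barely notices a starved flow, while the index over completion times is dominated by it.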
It would be interesting to look at parking-lot topologies with different bandwidths on the different bottlenecks. Cheers, Lachlan On 07/08/07, Wang gang wrote: > > Dear colleagues, > > We have released the tool 'An NS2 TCP Evaluation Tool Suite' for some time. > Since > then, we have received some feed back from users. We expect to receive wider > comments, > and seek collaborations or contributions to make the tool towards a useful > one. > > The download page is, > http://labs.nec.com.cn/tcpeval.htm > > > Here is a brief introduction, > This tool is motivated by the observation that there is significant overlap > among (but lack > of an agreed set of) the topologies, traffic, and metrics used by many > researchers in the > evaluation of TCP alternatives: effort could be saved by starting research > from an existing > framework. As such, our tool includes several typical topologies and > traffic models; it measures > some of the most important metrics commonly used in TCP evaluation; and it > can automatically > generate simulation statistics and graphs ready for inclusion in latex and > html documents. The > tool also contains an extendable open-source framework. With community > effort, we hope the > tool evolves into a widely accepted, well-defined set of TCP performance > evaluation benchmarks. > > Best Regards. > > Gang Wang. 
> > ---------------------------------------- > Gang Wang > NEC Labs, China > 010-62705962/63 (ext.511) > > wanggang at research.nec.com.cn > > > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 20 Aug 2007 07:52:35 -0700 Subject: [Tmrg] A reminder about An NS2 TCP Evaluation Tool Suite In-Reply-To: <5.1.1.8.2.20070820232343.07eb4e30@mail.jp.nec.com> References: <015901c7d962$dacd2b90$c44c1cac@ad.research.nec.com.cn> <5.1.1.8.2.20070820232343.07eb4e30@mail.jp.nec.com> Message-ID: Greetings Hide, On 20/08/07, Hideyuki Shimonishi wrote: > > Nice to talk to you again. Yes, good to hear from you. I hope you don't mind, but I'm Cc'ing this to tmrg. > It may be useful to consider distribution of per-flow throughput, rather > than some statistical values. > Also, in multiple-bottleneck topology, we may have to consider > alpha-proportional fairness, i.e. resource fairness v.s. throughput fairness. Good point. I was also thinking that it would be good both to evaluate the total "utility" based on some sort of alpha-fairness, and also try to evaluate what "alpha" is the best approximation in the case of multiple links. > Some results are shown in my PFLDnet 2007 presentation. > Some results about throughput distribution are shown in pp17-18. > Some results about fairness are shown in left figure of page 21, which > shows AReno, compound-TCP, and Hamilton-TCP are rather throughput fair, and > others are rather resource fair. I do not think this figure is the best, we > may need to use another statistics to show this tradeoff. OK, I'll check out those figures. > >4. The parking lot topology is very symmetric. 
> >It would be interesting to look at parking-lot topologies with different
> >bandwidths on the different bottlenecks.
>
> As you may know, since Cesar has presented our tool at ICCRG, our NEC-UCLA
> tool should be another option for doing simulations in complex topologies.

Yes, I saw that presentation. One feature I really like about that
tool is the way it compares very systematically against Reno using
exactly the same traffic.

Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From: h-shimonishi at cd.jp.nec.com (Hideyuki Shimonishi)
Date: Tue, 21 Aug 2007 02:21:40 +0900
Subject: [Tmrg] A reminder about An NS2 TCP Evaluation Tool Suite
In-Reply-To:
References: <5.1.1.8.2.20070820232343.07eb4e30@mail.jp.nec.com> <015901c7d962$dacd2b90$c44c1cac@ad.research.nec.com.cn> <5.1.1.8.2.20070820232343.07eb4e30@mail.jp.nec.com>
Message-ID: <5.1.1.8.2.20070821021410.04f3eb40@mail.jp.nec.com>

Hi Lachlan,

At 07/08/20 07:52 -0700, Lachlan Andrew wrote:
>Greetings Hide,
>
>On 20/08/07, Hideyuki Shimonishi wrote:
> >
> > Nice to talk to you again.
>
>Yes, good to hear from you. I hope you don't mind, but I'm Cc'ing
>this to tmrg.
>
> > It may be useful to consider the distribution of per-flow throughput,
> > rather than some statistical values.
> > Also, in a multiple-bottleneck topology, we may have to consider
> > alpha-proportional fairness, i.e. resource fairness vs. throughput fairness.
>
>Good point. I was also thinking that it would be good both to
>evaluate the total "utility" based on some sort of alpha-fairness,
>and also to try to evaluate which "alpha" is the best approximation in the
>case of multiple links.

I have no idea which alpha is better, but I think it would be
valuable to study which protocols are more resource-fair than Reno and
which are more throughput-fair than Reno.

> > Some results are shown in my PFLDnet 2007 presentation.
> > Some results about throughput distribution are shown in pp17-18.
> > Some results about fairness are shown in the left figure of page 21, which
> > shows AReno, compound-TCP, and Hamilton-TCP are rather throughput fair, and
> > others are rather resource fair. I do not think this figure is the best; we
> > may need to use other statistics to show this tradeoff.
>
>OK, I'll check out those figures.

Also, please check slide 14. It looks like AReno and Hamilton have a
similar alpha to Reno, and the others are more resource-fair.

> > >4. The parking lot topology is very symmetric. It would be
> > >interesting to look at parking-lot topologies with different
> > >bandwidths on the different bottlenecks.
> >
> > As you may know, since Cesar has presented our tool at ICCRG, our NEC-UCLA
> > tool should be another option for doing simulations in complex topologies.
>
>Yes, I saw that presentation. One feature I really like about that
>tool is the way it compares very systematically against Reno using
>exactly the same traffic.

Thanks. Your comments two years ago about the tool were really helpful
to me in developing the method!

Thanks,
HIDE

>Cheers,
>Lachlan
>
>--
>Lachlan Andrew  Dept of Computer Science, Caltech
>1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
>Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Mon, 20 Aug 2007 21:19:35 -0700
Subject: [Tmrg] TCP evaluation suite round-table
In-Reply-To:
References:
Message-ID:

Greetings all,

On 8-9 November, a few of us will be getting together at Caltech for a
"round table" to try to agree on some basic parameters and metrics for
TCP evaluation.

We won't try to answer things like "what fairness metric is best", but
we can agree on some basic parameters.
The situation we're trying to avoid is:

Group A finds that at 500Mbps, flow 1 reaches 10% of its final
throughput after 30s
Group B finds that at 622Mbps, flow 1 reaches 20% of its final
throughput after 20s

I'll try to have live video-conferencing via VRVS so that those who
can't come in person can still participate. Unfortunately, our
videoconferencing room is small, and so physical attendance will
probably be limited to a dozen or so people. Please let me know if
you're interested.

As basic goals, I'd like to come away from the roundtable with:
- a set of bandwidths that are of interest, say 10, 155, 622, 2500 Mbps
- a set of buffer sizes that are of interest, like BDP or 16384 packets
- a set of distributions of RTT that are of interest
- an agreed notion of "convergence time"
    -- e.g., "the average over period x is within y% of the final average"
- an agreed notion of "time to converge to fairness"
    -- e.g., "the ratio of averages over period x is within y% of the final ratio"
    -- should this metric depend on the final ratio achieved?
- an agreed notion of "intraflow variability"
    -- e.g., what timescales are of interest?
- an agreed set of traffic models for background traffic

Injong has added to that list:
- a measure of total link utilization
- fluctuation in utilization due to fluctuation in background traffic
- What is the per-flow fair bandwidth share?

I think some of those will be "easy", and we can sort them out on the
list before an interactive meeting, to save time for more debatable
ones. It would be good if everyone can throw in some ideas over the
next couple of months so that we can see which issues are the hard ones.

It would be good to have a common set of scenarios which could be
tested by simulation, emulation and real networks. Obviously,
simulation is the most flexible, and so it may have a larger set of
tests, but we can at least simulate the emulated cases.
Ulterior motive: I'd like people also to simulate/emulate the
scenarios that can also be tested on WAN-in-Lab :)

If this works, we can get more ambitious in a second roundtable
elsewhere.

Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Mon, 20 Aug 2007 21:36:41 -0700
Subject: [Tmrg] TCP evaluation suite round-table
In-Reply-To:
References:
Message-ID:

Greetings again,

On 20/08/07, Lachlan Andrew wrote:
> As basic goals, I'd like to come away from the roundtable with:
> - a set of bandwidths that are of interest, say 10, 155, 622, 2500 Mbps
>
> I think some of those will be "easy", and we can sort them out on the
> list before an interactive meeting, to save time for more debatable
> ones.

To test my hypothesis, I think that bandwidths should be easy to agree
on on-list. Obvious candidates are:

10 Mbit/s -- old Ethernet, the right ball-park for current ADSL/cable
54 Mbit/s -- 802.11a/g
100 Mbit/s -- Fast Ethernet
155 Mbit/s -- OC3/STM-1
400 Mbit/s -- used by Doug and Injong's Dummynet studies (IIRC)
622 Mbit/s -- OC12/STM-4
1000 Mbit/s -- GbE
2488 Mbit/s -- OC48/STM-16
9952 Mbit/s -- OC192/STM-64
10 Gbit/s -- 10GbE

Can we assume that Moore's law now allows Dummynet to run at
622 Mbit/s? If so, I'd strike 400 Mbit/s off in favour of OC12.

With my WAN-in-Lab hat on, I'd vote for GbE, OC48 and 10GbE, since we
have those.

Can we agree on using

10 Mbit/s
100 Mbit/s
622 Mbit/s
1000 Mbit/s
2488 Mbit/s
10 Gbit/s

in all simulations/experiments, unless there is a reason to deviate?
Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Mon, 20 Aug 2007 22:06:38 -0700
Subject: [Tmrg] TCP evaluation suite round-table
In-Reply-To:
References:
Message-ID:

Greetings again,

On 20/08/07, Lachlan Andrew wrote:
> - a measure of total link utilization

Again, I think this should be easy to agree on.

Rather than simply measuring total bit/s over the link (which can be
achieved by inducing congestion collapse), I would advocate using the
sum of the *receive* rates of the flows using each link.

For multi-link topologies, this would count the rate of multi-hop
flows multiple times, and so is different from the "total network
throughput" (sum of all flow rates). It is also more "useful" than
measuring the total network throughput, since the latter is maximized
by giving all capacity to any single-hop flows using a link.

Thoughts?

Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603

From: lars.eggert at nokia.com (Lars Eggert)
Date: Tue, 21 Aug 2007 12:12:07 +0300
Subject: [Tmrg] TCP evaluation suite round-table
In-Reply-To:
References:
Message-ID:

Hi,

On 2007-8-21, at 7:36, ext Lachlan Andrew wrote:
> Can we assume that Moore's law now allows Dummynet to run at
> 622Mbit/s? If so, I'd strike 400Mbit/s off in favour of OC12.

During my thesis, I found that dummynet doesn't simulate high-datarate
paths very accurately anymore once the CPU becomes loaded:

However, simulating wide-area Gigabit links with Dummynet
is problematic [ZEC2003]. Dummynet uses the kernel firewall
to identify packets for processing, and depends heavily on
the kernel timers to control when packets leave the
transmission buffer. Both mechanisms incur significant
overheads at high data rates.
Furthermore, high data rates cause high interrupt loads, which can
decrease system responsiveness and eventually lead to livelock
[MOGUL1997]. Because Dummynet processing occurs at the IP layer,
device interrupts cause delays that reduce the accuracy of the
simulation. These delays can also interfere with user-space
processing, and as a result affect the benchmark processes themselves.

[ZEC2003] Marko Zec and Miljenko Mikuc. Real-Time IP Network
Simulation at Gigabit Data Rates. Proc. International Conference on
Telecommunications (ConTEL), Zagreb, Croatia, June 11-13, 2003.

As you said, Moore's law may have pushed up the region of bandwidth
that can be accurately simulated, but it'd be good to have some
verification.

Lars

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2446 bytes
Desc: not available
Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070821/c6095385/attachment.bin

From: doug.leith at nuim.ie (Douglas Leith)
Date: Wed, 22 Aug 2007 08:26:57 +0100
Subject: [Tmrg] Tmrg-interest Digest, Vol 11, Issue 4
In-Reply-To:
References:
Message-ID:

Re dummynet, my experience these days is that it can run up to 1Gb
using modern hardware. In my experience more of an issue is that end
hosts can still have difficulty at high bandwidth-delay products due
to SACK processing overhead etc., and it is this that has placed the
upper limit on test speeds rather than anything else. I'm not sure
where things break these days but it would be easy enough to check.
Doug

On 21 Aug 2007, at 20:00, tmrg-interest-request at ICSI.Berkeley.EDU wrote:
> [...]
> ------------------------------
>
> End of Tmrg-interest Digest, Vol 11, Issue 4
> ********************************************

From: lars.eggert at nokia.com (Lars Eggert)
Date: Wed, 22 Aug 2007 11:31:19 +0300
Subject: [Tmrg] Tmrg-interest Digest, Vol 11, Issue 4
In-Reply-To:
References:
Message-ID: <29750F5F-0B96-4C03-B3AE-0CDD8DA21C6E@nokia.com>

On 2007-8-22, at 10:26, ext Douglas Leith wrote:
> Re dummynet, my experience these days is that it can run up to 1Gb
> using modern hardware. In my experience more of an issue is that end
> hosts can still have difficulty at high bandwidth-delay products due
> to SACK processing overhead etc., and it is this that has placed the
> upper limit on test speeds rather than anything else. I'm not sure
> where things break these days but it would be easy enough to check.

It may be interesting to compare the results in a dummynet setup with
one that uses real hardware (Lachlan's setup, for example). What I saw
a few years back was that dummynet bunched together packets in bursts,
due to the way device interrupts were handled by BSD at the time.
Essentially, device driver processing interrupts IP-layer processing,
and so dummynet only got cycles intermittently. The result was that
although you'd get a simulated path that had the desired
bandwidth/delay properties over longer timescales (>> CPU quantum), if
you looked at the packet-level trace, there were some oddities there.
For my stuff (queueing), that caused some issues, but I'm not sure if
it'd matter much for TCP evaluation.
Lars

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2446 bytes
Desc: not available
Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070822/3c3b302d/attachment.bin

From: doug.leith at nuim.ie (Douglas Leith)
Date: Wed, 22 Aug 2007 13:45:29 +0100
Subject: [Tmrg] Tmrg-interest Digest, Vol 11, Issue 4
In-Reply-To: <29750F5F-0B96-4C03-B3AE-0CDD8DA21C6E@nokia.com>
References: <29750F5F-0B96-4C03-B3AE-0CDD8DA21C6E@nokia.com>
Message-ID:

On 22 Aug 2007, at 09:31, Lars Eggert wrote:
> On 2007-8-22, at 10:26, ext Douglas Leith wrote:
>> Re dummynet, my experience these days is that it can run up to 1Gb
>> using modern hardware. In my experience more of an issue is that end
>> hosts can still have difficulty at high bandwidth-delay products due
>> to SACK processing overhead etc., and it is this that has placed the
>> upper limit on test speeds rather than anything else. I'm not sure
>> where things break these days but it would be easy enough to check.
>
> It may be interesting to compare the results in a dummynet setup
> with one that uses real hardware (Lachlan's setup, for example).

Sounds like a good idea.

> What I saw a few years back was that dummynet bunched together
> packets in bursts, due to the way device interrupts were handled by
> BSD at the time. Essentially, device driver processing interrupts
> IP-layer processing, and so dummynet only got cycles
> intermittently. The result was that although you'd get a simulated
> path that had the desired bandwidth/delay properties over longer
> timescales (>> CPU quantum), if you looked at the packet-level
> trace, there were some oddities there. For my stuff (queueing),
> that caused some issues, but I'm not sure if it'd matter much for
> TCP evaluation.

Makes sense.

Doug

From: ldunn at cisco.com (Lawrence D.
Dunn)
Date: Thu, 23 Aug 2007 12:22:33 -0500
Subject: [Tmrg] Fwd: Re: TCP evaluation suite round-table
Message-ID:

tmrg-interest folks,
In response to Lachlan's post on an evaluation suite round-table, I
sent him a couple of thoughts unicast. Lachlan felt they might be
relevant/interesting to the list members (I wasn't totally sure) ;-)
so I'm forwarding my note, below. I'll also send his quite-thoughtful
reply in a second...
Larry
--

>Date: Tue, 21 Aug 2007 09:08:33 -0500
>To: l.andrew at ieee.org, ld
>From: "Lawrence D. Dunn"
>Subject: Re: [Tmrg] TCP evaluation suite round-table
>Cc:
>Bcc:
>X-Attachments:
>
>Lachlan, (unicast),
> ( I almost sent this to the list, but figured, since parts sound
> like a bit of a tangential ramble, I'd start with a unicast,
> and if I feel the same way in a day or so, maybe send it to the list... ;-)
>
> I think this one *might* generate some discussion/debate.
>
> For example, your last sentence (..."useful"...) seems to imply
> that giving all capacity to single-hop flows is somehow bad, or wrong.
> Though I probably agree, it seems that maybe this connects
> "utilization" to some notion of "fairness" (i.e. on what basis
> have we concluded that behavior-X is bad/wrong/misleading?)
> Are utilization and fairness meant to be coupled, or orthogonal,
> or varies-by-scheme-and-that's-OK?
>
> Also, it might be worth considering, for link utilization,
> whether a single-valued metric is the right choice.
> For example, is a "square wave" w/ average utilization
> of 50% somehow "better" or "worse" than a somewhat-noisy
> utilization that seems to hover near 50%? Do we care?
> Maybe we should stay away from better/worse judgements at this
> early stage, but a single-valued metric (average of the sums
> or sum of averages in your example below?)
> probably means that we can't tell the difference.
> On one hand, maybe it's best not to complicate things.
> On another hand, maybe adding standard-deviation,
> or some other metric(s)-of-your-choice, might at least help
> capture differentiation between two behaviors
> that is useful.
>
> Or not. Maybe I'm over-thinking it. ;-)
>
> Is it correct to assume that we're excluding exotica like some
> TCP proxy that "fans-out" a single flow, multicast-fashion,
> to multiple downstream receivers? I suspect that if such
> a proxy were placed after the bottleneck, it might wreak havoc
> with the sum-of-receive-rates approach.
> (Easy counter-point: "well, that's not really TCP" - perhaps so,
> but it makes me think that the tighter we can bound what
> is in/out of the measurement bounds, the fewer disputes
> we might have later on. OTOH, tighter bounds might stifle
> some cool/creative approaches, so maybe it's best to not
> worry about it).
>
> Having hinted at conflicting sides of most/all points, I'll
> try to be quiet for a bit... ;-)
>
>Best regards,
>Larry
>--
>
>At 10:06 PM -0700 8/20/07, Lachlan Andrew wrote:
>>Greetings again,
>>
>>On 20/08/07, Lachlan Andrew wrote:
>>
>>> - a measure of total link utilization
>>
>>Again, I think this should be easy to agree on.
>>
>>Rather than simply measuring total bit/s over the link (which can be
>>achieved by inducing congestion collapse), I would advocate using the
>>sum of the *receive* rates of the flows using each link.
>>
>>For multi-link topologies, this would count the rate of multi-hop
>>flows multiple times, and so is different from the "total network
>>throughput" (sum of all flow rates). It is also more "useful" than
>>measuring the total network throughput, since the latter is maximized
>>by giving all capacity to any single-hop flows using a link.
>>
>>Thoughts?
>>
>>Lachlan
>>
>>--
>>Lachlan Andrew  Dept of Computer Science, Caltech
>>1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
>>Phone: +1 (626) 395-8820  Fax: +1 (626) 568-3603
>>_______________________________________________
>>Tmrg-interest mailing list
>>Tmrg-interest at ICSI.Berkeley.EDU
>>http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest

From: ldunn at cisco.com (Lawrence D. Dunn)
Date: Thu, 23 Aug 2007 12:23:20 -0500
Subject: [Tmrg] Fwd: Re: TCP evaluation suite round-table
Message-ID:

tmrg-interest folks,
Here's Lachlan's reply (with his permission).
Larry
--

>Date: Tue, 21 Aug 2007 09:15:04 -0700
>From: "Lachlan Andrew"
>Reply-To: l.andrew at ieee.org
>To: "Lawrence D. Dunn" , "David Hayes"
>Subject: Re: [Tmrg] TCP evaluation suite round-table
>
>Greetings Larry,
>
>On 21/08/07, Lawrence D. Dunn wrote:
>> Lachlan, (unicast),
>> ( I almost sent this to the list, but figured, since parts sound
>> like a bit of a tangential ramble, I'd start with a unicast,
>> and if I feel the same way in a day or so, maybe send it to the list... ;-)
>
>As always, you raise some really good points. The list would benefit
>from all of them.
>
>> For example, your last sentence (..."useful"...) seems to imply
>> that giving all capacity to single-hop flows is somehow bad, or wrong.
>> Though I probably agree, it seems that maybe this connects
>> "utilization" to some notion of "fairness" (i.e. on what basis
>> have we concluded that behavior-X is bad/wrong/misleading?)
>> Are utilization and fairness meant to be coupled, or orthogonal,
>> or varies-by-scheme-and-that's-OK?
>
>My personal opinion is that Frank Kelly was spot on. Fairness and
>utilization are intimately coupled, and it is easy to evaluate them
>together if we can agree on what we're trying to achieve.
>
>I agree that we should steer away from value judgements at the moment.
>However, if we're trying to decide on a small set of measurements
>from which we *can* make value judgements, then I think we should
>avoid introducing misleading coupling of quantities like fairness
>and throughput.
>
>A metric which is clearly positively correlated with one quantity
>(increases with throughput) and negatively correlated with another
>(decreases with fairness) is IMO going to be less useful in eventually
>making those value judgements than one which is positively correlated
>with one quantity and *independent* of the other.
>
>If we're going to introduce a measure which is dependent on both
>throughput and fairness, I'd advocate "aggregate utility", which
>increases with both.
>
>> Also, it might be worth considering, for link utilization,
>> whether a single-valued metric is the right choice.
>> For example, is a "square wave" w/ average utilization
>> of 50% somehow "better" or "worse" than a somewhat-noisy
>> utilization that seems to hover near 50%? Do we care?
>> Maybe we should stay away from better/worse judgements at this
>> early stage, but a single-valued metric (average of the sums
>> or sum of averages in your example below?)
>> probably means that we can't tell the difference.
>> On one hand, maybe it's best not to complicate things.
>> On another hand, maybe adding standard-deviation,
>> or some other metric(s)-of-your-choice, might at least help
>> capture differentiation between two behaviors
>> that is useful.
>>
>> Or not. Maybe I'm over-thinking it. ;-)
>
>Again, a very good point. One question to ask would be why variation
>in utilization is bad. To me, it is only bad if it has a measurable
>effect on some quantity that applications care about, like loss,
>jitter or expected file transfer time. That quantity may be
>experienced by the "big" TCP flow, or by CBR cross traffic, etc.
>Rather than measuring variation for its own sake, I'd prefer to find
>metrics which measure its harm.
>
>> Is it correct to assume that we're excluding exotica like some
>> TCP proxy that "fans-out" a single flow, multicast-fashion,
>> to multiple downstream receivers? I suspect that if such
>> a proxy were placed after the bottleneck, it might wreak havoc
>> with the sum-of-receive-rates approach.
>
>I agree that it would mean that the sum-of-receive-rates could be much
>greater than the link capacity, but I see that as appropriate and in
>fact a big benefit of this approach.
>
>This is very related to my discussions with Michael Welzl on iccrg
>about rate-given-corruption (did you see them?). If a flow is giving
>benefit to two users, to me it is twice as useful as one giving
>benefit only to one, and "should" be given a greater share of the
>bottleneck resource. Again, how much more depends on what "utility"
>we're trying to maximize, and how much it "costs" the network to
>transport the data.
>
>> (Easy counter-point: "well, that's not really TCP" - perhaps so,
>> but it makes me think that the tighter we can bound what
>> is in/out of the measurement bounds, the fewer disputes
>> we might have later on. OTOH, tighter bounds might stifle
>> some cool/creative approaches, so maybe it's best to not
>> worry about it).
>
>Yes. My concern with what is in/out is routing.
Maximizing the sum >of (link_count*throughput) encourages longer paths. If we're dealing >with fixed routing, then I think we should try to come up with metrics >that are meaningful for all protocols which decide when and how much >to transmit, whether TCP or not. > >Feel free to forward any of this to the list. > >Cheers, >Lachlan > >-- >Lachlan Andrew Dept of Computer Science, Caltech >1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: sallyfloyd at mac.com (Sally Floyd) Date: Wed, 29 Aug 2007 20:55:34 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: <7dd379fa8c871c6bfe026828dd76c716@mac.com> > Can we agree on using > > 10Mbit/s > 100Mbit/s > 622Mbit/s > 1000Mbit/s > 2488Mbit/s > 10Gbit/s > > in all simulations/experiments, unless there is a reason to deviate? That sounds reasonable to me, for basic scenarios. It is probably also worthwhile to look at a scenario with a congested low-bandwidth access link (dial-up, since it still exists), and with a low-bandwidth congested wireless link (1-2 Mbps), assuming that those are still around. Because it is good for researchers to also explore what would happen if someone uses a particular congestion control mechanism over a very-low-bandwidth path. - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Wed, 29 Aug 2007 21:14:18 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: Lachlan - > Rather than simply measuring total bit/s over the link (which can be > achieved by inducing congestion collapse), I would advocate using the > sum of the *receive* rates of the flows using each link. I think that is a fine idea. But it would be easy for each simulation or experiment to also report the total bps over the link, in each direction. E.g., to compare with the aggregate receive rates. Or to allow "local" metrics about throughput vs. delay vs. 
drop rates for understanding the queue management. Or some such. - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 4 Sep 2007 15:38:28 -0700 Subject: [Tmrg] [IRSG] draft-irtf-tmrg-metrics-10 IRSG poll In-Reply-To: <20070904073634.GA10956@elstar.local> References: <20070904073634.GA10956@elstar.local> Message-ID: Juergen - > My vote is "Ready to publish" with the following comments: > > a) I think the acronym for the Transport Modeling Research Group is > TMRG and not TRMG (shows up multiple times in the ID). Oops. Thanks, fixed. > b) Section 2.3.1 talks about fairness metrics and introduces Jain's > fairness without saying what x_i actually stands for. Further down > in the discussion of the product measure, we read: > > [...] For our purposes, let x_i be the > throughput for the i-th connection. (In other contexts x_i is > taken > as the power of the i-th connection, and the product measure is > referred to as network power.) > > This text leaves it open whether this definition of x_i only > applies to the discussion of the product measure or also to other > places in the document (like Jain's fairness). I think this should > be clarified. Done. (I moved the definition of x_i so that it comes before Jain's fairness metric.) > In the discussion of epsilon-fairness, we then read: > > where x_i is the resource allocation to the i-th flow or user. > > I am wondering how resource allocation is actually defined / > measured. Is it the fraction of the bandwidth allocated to the > flow? In the paper where epsilon-fairness is defined, they refer to x_i as the sending rate. I have clarified. > c) On page 11: s/that TCP/than TCP/ Thanks, fixed. Many thanks for the feedback. The revised version of the draft is at: http://www.icir.org/floyd/papers/draft-irtf-tmrg-metrics-11a.txt - Sally http://www.icir.org/floyd/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20070904/891b2378/attachment.html From: iyengar at mail.eecis.udel.edu (Janardhan Iyengar) Date: Wed, 05 Sep 2007 11:46:40 -0400 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: <46DECF60.7020500@mail.eecis.udel.edu> Hi Lachlan/all, > Can we agree on using > > 10Mbit/s > 100Mbit/s > 622Mbit/s > 1000Mbit/s > 2488Mbit/s > 10Gbit/s > > in all simulations/experiments, unless there is a reason to deviate? I second Doug's point about the endpoint becoming the bottleneck in experiments. We recently did some work trying to saturate 2 GigE links using 3.2 GHz Pentium-4 processors (hyperthreading OFF) with jumbograms, and we recognized two bottlenecks that were very close: 1/ approaching CPU capacity at ends 2/ motherboard backplane capacity (there was also something about the PCI/PCI-express bus limits that I cannot quite remember...) The backplane limit is not an unsurmountable problem, but I wanted to point out that testing with bandwidths beyond 1000Mbit/s may not be feasible in some cases. regards, - jana -- Janardhan R. Iyengar Visiting Assistant Professor Connecticut College http://cs.conncoll.edu/iyengar/ From: iyengar at mail.eecis.udel.edu (Janardhan Iyengar) Date: Wed, 05 Sep 2007 11:55:49 -0400 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: <46DED185.4040408@mail.eecis.udel.edu> Hi all, Suggested addition to this list: - set of Path MTUs of interest - 1500 bytes, 9000 bytes, (maybe 576 bytes? others?) thanks, - jana Lachlan Andrew wrote: > Greetings all, > > On 8-9 November, a few of us will be getting together at Caltech for a > "round table" to try to agree on some basic parameters and metrics for > TCP evaluation. > > We won't try to answer things like "what fairness metric is best", but > we can agree on some basic parameters. 
The situation we're trying to > avoid is: > > Group A finds that at 500Mbps, flow 1 reaches 10% of its final > throughput after 30s > Group B finds that at 622Mbps, flow 1 reaches 20% of its final > throughput after 20s > > I'll try to have live video-conferencing via VRVS > so that those who can't come in > person can still participate. Unfortunately, our videoconferencing > room is small, and so physical attendance will probably be limited to > a dozen or so people. Please let me know if you're interested. > > As basic goals, I'd like to come away from the roundtable with: > - a set of bandwidths that are of interest, say 10, 155, 622, 2500 Mbps > - a set of buffer sizes that are of interest, like BDP or 16384 packets > - a set of distributions of RTT that are of interest > - an agreed notion of "convergence time" > -- e.g., "the average over period x is within y% of the > final average" > - an agreed notion of "time to converge to fairness" > -- e.g., "the ratio of averages over period x is within y% > of the final ratio" > -- should this metric depend on the final ratio achieved? > - an agreed notion of "intraflow variability" > -- e.g., what timescales are of interest? > - an agreed set of traffic models for background traffic > > Injong has added to that list: > - a measure of total link utilization > - fluctuation in utilization due to fluctuation in background traffic. > - What is the per-flow fair bandwidth share? > > I think some of those will be "easy", and we can sort them out on the > list before an interactive meeting, to save time for more debatable > ones. It would be good if everyone can throw in some ideas for the > next couple of months so that we can see which issues are the hard ones. > > It would be good to have a common set of scenarios which could be > tested by simulation, emulation and real networks. Obviously, the > simulation is the most flexible, and so it may have a larger set of > tests, but we can at least simulate the emulated cases. 
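[Editor's aside, not part of the original message: one concrete reading of the "convergence time" notion proposed above -- the first time after which the trailing-window average stays within y% of the final average -- can be sketched as below. The function name, the per-interval sampling, and the reset-when-the-band-is-left rule are illustrative assumptions, not anything agreed in the thread.]

```python
def convergence_time(samples, window, tolerance):
    """samples: equally spaced per-interval throughput measurements.
    window: number of samples in the averaging period "x".
    tolerance: the fraction "y" (e.g. 0.05 for within 5%).
    Returns the index of the first window whose average -- and that of
    every later window -- stays within tolerance of the final average,
    or None if the trace never settles."""
    final_avg = sum(samples[-window:]) / window
    converged_at = None
    for i in range(len(samples) - window + 1):
        win_avg = sum(samples[i:i + window]) / window
        if abs(win_avg - final_avg) <= tolerance * final_avg:
            if converged_at is None:
                converged_at = i  # candidate convergence point
        else:
            converged_at = None  # left the band; reset the candidate

    return converged_at

# Example: a flow ramping up to ~100 units per interval.
trace = [10, 30, 60, 85, 95, 99, 101, 100, 99, 100, 101, 100]
print(convergence_time(trace, window=3, tolerance=0.05))  # -> 4
```

Whether the candidate point should reset when the average later leaves the band is exactly the kind of detail the roundtable would need to pin down.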
> > Ulterior motive: I'd like people also to simulate/emulate the > scenarios that can also be tested on WAN-in-Lab :) > > If this works, we can get more ambitious in a second roundtable elsewhere. > > Cheers, > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest -- Janardhan R. Iyengar Visiting Assistant Professor Connecticut College http://cs.conncoll.edu/iyengar/ From: fred at cisco.com (Fred Baker) Date: Wed, 5 Sep 2007 10:42:36 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <46DECF60.7020500@mail.eecis.udel.edu> References: <46DECF60.7020500@mail.eecis.udel.edu> Message-ID: I think you're really missing the boat if you don't test at sub- megabit speeds as well. I could tell stories, like the one in Malawi (deepest darkest Africa) in which a service provider that sold radio links that they reduced to 64 KBPS by throwing away any traffic that arrived faster than that. At least look at 2 MBPS and 256 KBPS. Your TCP variant should run at the low speeds as well as the high ones. On Sep 5, 2007, at 8:46 AM, Janardhan Iyengar wrote: > Hi Lachlan/all, > >> Can we agree on using >> >> 10Mbit/s >> 100Mbit/s >> 622Mbit/s >> 1000Mbit/s >> 2488Mbit/s >> 10Gbit/s >> >> in all simulations/experiments, unless there is a reason to deviate? > > I second Doug's point about the endpoint becoming the bottleneck in > experiments. 
We recently did some work trying to saturate 2 GigE > links using 3.2 GHz Pentium-4 processors (hyperthreading OFF) with > jumbograms, and we recognized two bottlenecks that were very close: > 1/ approaching CPU capacity at ends > 2/ motherboard backplane capacity > > (there was also something about the PCI/PCI-express bus limits that > I cannot quite remember...) > > The backplane limit is not an unsurmountable problem, but I wanted > to point out that testing with bandwidths beyond 1000Mbit/s may not > be feasible in some cases. > > regards, > - jana > > -- > Janardhan R. Iyengar > Visiting Assistant Professor > Connecticut College > http://cs.conncoll.edu/iyengar/ > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 5 Sep 2007 10:56:57 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: <46DECF60.7020500@mail.eecis.udel.edu> Message-ID: Greetings Fred, Yes, Sally had already suggested testing down to 56kbit/s (Thanks Sally!). Most proposals follow HS-TCP and revert to Reno in these cases, but it would certainly be good to check that the implementations work properly. Cheers, Lachlan On 05/09/07, Fred Baker wrote: > I think you're really missing the boat if you don't test at sub- > megabit speeds as well. I could tell stories, like the one in Malawi > (deepest darkest Africa) in which a service provider that sold radio > links that they reduced to 64 KBPS by throwing away any traffic that > arrived faster than that. At least look at 2 MBPS and 256 KBPS. Your > TCP variant should run at the low speeds as well as the high ones. 
-- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 5 Sep 2007 11:03:32 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: Greetings Sally, On 29/08/07, Sally Floyd wrote: > Lachlan wrote > > > I would advocate using the > > sum of the *receive* rates of the flows using each link. > > I think that is a fine idea. > > But it would be easy for each simulation or experiment to also > report the total bps over the link, in each direction. E.g., to > compare with the aggregate receive rates. Or to allow "local" > metrics about throughput vs. delay vs. drop rates for understanding > the queue management. Or some such. True, that would be useful. I certainly wouldn't want people to think they should only do a subset of what we agree on. On the other hand, I wouldn't want them to feel obliged to do a superset either... Perhaps the goal should be to find a list of "types" of quantities to measure (like link utilization), and say "*if* you're going to measure quantity X, please include metric Y along with any others that you choose" and "*if* you're interested in bandwidths in this range, please include bandwidths from the following list". Do you have any recommendations? I'll check the wording in your current draft... 
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 5 Sep 2007 11:32:33 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <46DECF60.7020500@mail.eecis.udel.edu> References: <46DECF60.7020500@mail.eecis.udel.edu> Message-ID: Greetings Jana, On 05/09/07, Janardhan Iyengar wrote: > > I second Doug's point about the endpoint becoming the bottleneck in experiments. We recently did some work trying to saturate 2 GigE links using 3.2 GHz Pentium-4 processors (hyperthreading OFF) with jumbograms, and we recognized two bottlenecks that were very close: > 1/ approaching CPU capacity at ends > 2/ motherboard backplane capacity Yes, there are certainly problems getting above 1 Gbps. (More below for those who are interested in Linux.) > (there was also something about the PCI/PCI-express bus limits that I cannot quite remember...) PCI-express should have no problem. PCI-X can handle up to 7Gbit/s if it is working properly, although I have some buggy cards which limit it to 5Gbit/s. I'm not sure about regular PCI. > testing with bandwidths beyond 1000Mbit/s may not be feasible in some cases. True. However we can specify a wider range of tests than can currently be performed, so that (a) simulations can be consistent (b) when people *do* get fast enough systems, they can be consistent with those simulations. The main question I have is not about hardware testing at >1Gbps, but whether we can make the more modest move of replacing the non-standard 400Mbit/s (used for Dummynet) by 622Mbit/s so that it can be compared with OC12 hardware. For people interested in Linux: Prompted by Doug's and Lars's comments, I've also just been doing some experiments, and had a case where it took about 4 minutes to do a __release_sock because of SACK processing (mainly tcp_sacktag_write_queue). 
I didn't check the CPU at the time, unfortunately, but I assume it was high.... Doug suggested looking at /proc/net/softnet_stat to look for packet loss at the driver level, but I didn't observe any -- perhaps ACK clocking was working as it should! I'm currently looking to see if some of this can be helped by dropping SACKs when the backlog is too high. Would that just drop fast-path SACKs? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: weddy at grc.nasa.gov (Wesley Eddy) Date: Wed, 5 Sep 2007 14:46:07 -0400 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: <20070905184607.GD28137@grc.nasa.gov> On Wed, Sep 05, 2007 at 10:42:36AM -0700, Fred Baker wrote: > I think you're really missing the boat if you don't test at sub- > megabit speeds as well. I could tell stories, like the one in Malawi > (deepest darkest Africa) in which a service provider that sold radio > links that they reduced to 64 KBPS by throwing away any traffic that > arrived faster than that. At least look at 2 MBPS and 256 KBPS. Your > TCP variant should run at the low speeds as well as the high ones. > Many (most? all?) of the proposals drop back into 2581 behavior if the cwnd is "small", so I (sort of) disagree and think it's alright to focus on the faster rates. I hedge that with "sort of", because I'm assuming that all designs fall back into the well-understood legacy behavior in low bandwidth-delay product scenarios or at least could easily be made to do so, and I don't know if that's a faulty assumption. -- Wesley M. 
Eddy Verizon Federal Network Systems From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 5 Sep 2007 11:58:40 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <20070905184607.GD28137@grc.nasa.gov> References: <20070905184607.GD28137@grc.nasa.gov> Message-ID: Greetings Wes, On 05/09/07, Wesley Eddy wrote: > > Many (most? all?) of the proposals drop back into 2581 behavior if the > cwnd is "small", so I (sort of) disagree and think it's alright to focus > on the faster rates. I hedge that with "sort of", because I'm assuming > that all designs fall back into the well-understood legacy behavior in > low bandwidth-delay product scenarios or at least could easily be > made to do so, and I don't know if that's a faulty assumption. Good point; "high speed" protocols do that. However, if we want this test suite to apply to testing things like LT-TCP or LP-TCP, then they won't necessarily revert to standard behaviour. I say there's no harm in extending the range of rates in the "preferred list". Nothing forces people to use *all* rates in the list. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: iyengar at mail.eecis.udel.edu (Janardhan Iyengar) Date: Wed, 05 Sep 2007 15:08:37 -0400 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <20070905184607.GD28137@grc.nasa.gov> References: <20070905184607.GD28137@grc.nasa.gov> Message-ID: <46DEFEB5.8070808@mail.eecis.udel.edu> Hey Wes, > Many (most? all?) of the proposals drop back into 2581 behavior if the > cwnd is "small", so I (sort of) disagree and think it's alright to focus The point, I think, is to ensure that *all* proposals, *not most*, work at low bandwidth rates too. For most proposals, it might just be a question of making sure that when optimizing for other conditions, nothing broke with the oft-neglected low bandwidth case. 
But it is a test that needs to be run. regards, - jana -- Janardhan R. Iyengar Visiting Assistant Professor Connecticut College http://cs.conncoll.edu/iyengar/ From: fred at cisco.com (Fred Baker) Date: Wed, 5 Sep 2007 12:13:23 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <20070905184607.GD28137@grc.nasa.gov> References: <20070905184607.GD28137@grc.nasa.gov> Message-ID: it's supposed to do that, so we won't test for that. I'll pass that thought along to Cisco dev-test. They'll appreciate it. It will reduce their work dramatically. On Sep 5, 2007, at 11:46 AM, Wesley Eddy wrote: > On Wed, Sep 05, 2007 at 10:42:36AM -0700, Fred Baker wrote: >> I think you're really missing the boat if you don't test at sub- >> megabit speeds as well. I could tell stories, like the one in Malawi >> (deepest darkest Africa) in which a service provider that sold radio >> links that they reduced to 64 KBPS by throwing away any traffic that >> arrived faster than that. At least look at 2 MBPS and 256 KBPS. Your >> TCP variant should run at the low speeds as well as the high ones. >> > > > Many (most? all?) of the proposals drop back into 2581 behavior if the > cwnd is "small", so I (sort of) disagree and think it's alright to > focus > on the faster rates. I hedge that with "sort of", because I'm > assuming > that all designs fall back into the well-understood legacy behavior in > low bandwidth-delay product scenarios or at least could easily be > made to do so, and I don't know if that's a faulty assumption. > > -- > Wesley M. Eddy > Verizon Federal Network Systems From: weddy at grc.nasa.gov (Wesley Eddy) Date: Wed, 5 Sep 2007 15:40:00 -0400 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: <20070905184607.GD28137@grc.nasa.gov> Message-ID: <20070905194000.GE28137@grc.nasa.gov> On Wed, Sep 05, 2007 at 12:13:23PM -0700, Fred Baker wrote: > it's supposed to do that, so we won't test for that. > > I'll pass that thought along to Cisco dev-test. 
They'll appreciate > it. It will reduce their work dramatically. > Understood :), though I'd thought his effort was for evaluating the merits of various proposals on common agreed-upon configurations, not for debugging the implementations which should hopefully be done well before the trials used for comparisons. Debugging requires checking a lot of scheme-specific cases, loss patterns, and edge-cases ... if the desire is to make a debugging checklist, this doesn't even scratch the surface. Lachlan's point that LT-TCP and others might behave differently at low rates is a better reason, and I buy his argument. -- Wesley M. Eddy Verizon Federal Network Systems From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 6 Sep 2007 11:49:09 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <20070905184607.GD28137@grc.nasa.gov> References: <20070905184607.GD28137@grc.nasa.gov> Message-ID: <2277ed41b75d22c844c751fdb716263f@mac.com> > Many (most? all?) of the proposals drop back into 2581 behavior if the > cwnd is "small", so I (sort of) disagree and think it's alright to > focus > on the faster rates. I assume that the test scenarios will also be used for proposed changes such as Quick-Start and other proposals for faster start-up, and those should definitely be tested on a very wide range of path bandwidths. (This is largely orthogonal to whether the congestion control behavior drops back to 2581.) - Sally http://www.icir.org/floyd/ From: garmitage at swin.edu.au (grenville armitage) Date: Fri, 07 Sep 2007 16:42:06 +1000 Subject: [Tmrg] Logging active TCP details in FreeBSD 5, 6 and 7 Message-ID: <46E0F2BE.8090405@swin.edu.au> All, On the off chance this is of general interest, I'd like to let people know of a FreeBSD kernel module we've developed for logging various TCP state variables in a running kernel while sessions are active. 
Called SIFTR ("sifter"), we built this for our own research into precisely how a FreeBSD TCP stack behaves when faced with real and artificial (e.g. dummynet) paths. Figured it might also be of interest to others. See http://caia.swin.edu.au/urp/newtcp/tools.html (under SIFTR) for a readme, changelog and tarball. The authors would love to get feedback from anyone trying it out. (SIFTR has been developed and tested mostly under FreeBSD 6.2, but we believe it'll be stable under 5.x and 7.x too.) cheers, gja From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 7 Sep 2007 08:22:23 -0700 Subject: [Tmrg] Logging active TCP details in FreeBSD 5, 6 and 7 In-Reply-To: <46E0F2BE.8090405@swin.edu.au> References: <46E0F2BE.8090405@swin.edu.au> Message-ID: Greetings Grenville, Would it be possible to give it an interface like Web100? We're currently building our benchmarking suite to use (a slightly optimized) Web100 to monitor system internals, and it would be great to be able to use SIFTR without too much modification of our suite. Cheers, Lachlan On 06/09/2007, grenville armitage wrote: > All, > > On the off chance this is of general interest, I'd like to let > people know of a FreeBSD kernel module we've developed for > logging various TCP state variables in a running kernel > while sessions are active. > > Called SIFTR ("sifter"), we built this for our own research into > precisely how a FreeBSD TCP stack behaves when faced with real > and artificial (e.g. dummynet) paths. Figured it might also be > of interest to others. > > See http://caia.swin.edu.au/urp/newtcp/tools.html (under SIFTR) > for a readme, changelog and tarball. The authors would love to get > feedback from anyone trying it out. (SIFTR has been developed and > tested mostly under FreeBSD 6.2, but we believe it'll be stable > under 5.x and 7.x too.) 
> > cheers, > gja -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: ccardena at itesm.mx (César Cárdenas) Date: Thu, 13 Sep 2007 18:41:22 +0200 Subject: [Tmrg] delay and gooput fairness Message-ID: <46D43F2D000175C4@mailserver1.itesm.mx> Dear all, Is there any measure for "delay fairness" or "goodput fairness" of a TCP flow? If yes, I would appreciate it if you could point me to a good reference. I apologize if these lists are not adequate for this question. Please accept my best regards, César César Cárdenas-Pérez (ccardena at itesm.mx) Monterrey Tech, Querétaro Campus, México http://www.qro.itesm.mx Personal Phone: +(33) 633306689 Office Phone: +(33) 145817146 Office Fax: +(33) 145813119 Skype pseudo: cesarcardenas7 All phones and fax from abroad France The content of this data transmission is not considered as an offer, proposal, understanding, or agreement unless it is confirmed in a document signed by a legal representative of ITESM. The content of this data transmission is confidential and it is intended to be delivered only to the addressees; therefore, it shall not be distributed and/or disclosed through any means without the original sender's previous authorization. If you are not the addressee you are forbidden to use it, either totally or partially, for any purpose. From: ccardena at itesm.mx (César Cárdenas) Date: Thu, 13 Sep 2007 20:02:19 +0200 Subject: [Tmrg] delay and gooput fairness In-Reply-To: <46D43F2D000175C4@mailserver1.itesm.mx> Message-ID: <46D43F2D0001791C@mailserver1.itesm.mx> Dear all, I realize I made a wrong question: Is there any measure of "delay fairness" or "goodput fairness" for a group of TCP flows served by a FQ algorithm? If yes, do you know a procedure to estimate them when you have 30 replications? Or do you have some suggestions? 
Many thanks in advance, César César Cárdenas-Pérez (ccardena at itesm.mx) Monterrey Tech, Querétaro Campus, México http://www.qro.itesm.mx Personal Phone: +(33) 633306689 Office Phone: +(33) 145817146 Office Fax: +(33) 145813119 Skype pseudo: cesarcardenas7 All phones and fax from abroad France From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Thu, 13 Sep 2007 14:52:42 -0700 Subject: [Tmrg] [Iccrg] RE: delay and gooput fairness In-Reply-To: <46D43F2D0001791C@mailserver1.itesm.mx> References: <46D43F2D000175C4@mailserver1.itesm.mx> <46D43F2D0001791C@mailserver1.itesm.mx> Message-ID: Greetings César, For delay, you could use Jain's index. (I think he proposed it in the 1989 paper "Analysis of the increase and decrease algorithms for congestion avoidance in computer networks". Could someone correct me?) Jain's index is fairly poor for goodput, but you can use Jain's index on the reciprocal of the goodput. An alternative is to use Kelly's utility maximization framework, and measure the drop in utility due to the inequality of rates. Cheers, Lachlan On 13/09/2007, César Cárdenas wrote: > Dear all, > I realize I made a wrong question: > > Is there any measure of "delay fairness" or "goodput fairness" for a group > of TCP flows served by a FQ algorithm? > If yes, do you know a procedure to estimate them when you have 30 replications? > Or do you have some suggestions? > > Many thanks in advance, > César 
> > > _______________________________________________ > Iccrg mailing list > Iccrg at cs.ucl.ac.uk > http://oakham.cs.ucl.ac.uk/mailman/listinfo/iccrg > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: sallyfloyd at mac.com (Sally Floyd) Date: Fri, 14 Sep 2007 07:11:03 -0700 Subject: [Tmrg] [Iccrg] RE: delay and gooput fairness In-Reply-To: <46D43F2D0001791C@mailserver1.itesm.mx> References: <46D43F2D0001791C@mailserver1.itesm.mx> Message-ID: <753b4cccb2f4426a775b09527254b30e@mac.com> Cesar - > Is there any measure of "delay fairness" or "goodput fairness" for a > group > of TCP flows served by a FQ algorithm? You could look at the TMRG document "Metrics for the Evaluation of Congestion Control Mechanisms", available from the TMRG web page at "http://www.icir.org/tmrg/", for pointers to some of the fairness metrics that have been used in the past. - Sally http://www.icir.org/floyd/ From: weddy at grc.nasa.gov (Wesley Eddy) Date: Mon, 17 Sep 2007 10:07:34 -0400 Subject: [Tmrg] [Iccrg] RE: delay and gooput fairness In-Reply-To: References: <46D43F2D0001791C@mailserver1.itesm.mx> Message-ID: <20070917140734.GC6573@grc.nasa.gov> On Thu, Sep 13, 2007 at 02:52:42PM -0700, Lachlan Andrew wrote: > Greetings César, > > For delay, you could use Jain's index. (I think he proposed it in the > 1989 paper "Analysis of the increase and decrease algorithms for > congestion avoidance in computer networks". Could someone correct > me?) > It's very nicely explained in: R. Jain, D. Chiu, and W. Hawe, "A Quantitative Measure Of Fairness And Discrimination For Resource Allocation In Shared Computer Systems", DEC Research Report TR-301, September 1984. online at: http://www.cse.wustl.edu/~jain/papers/fairness.htm -- Wesley M. 
Eddy Verizon Federal Network Systems From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 17 Sep 2007 16:00:35 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: Lachlan - On Sep 5, 2007, at 11:03 AM, Lachlan Andrew wrote: > Greetings Sally, > On 29/08/07, Sally Floyd wrote: >> Lachlan wrote >>> I would advocate using the >>> sum of the *receive* rates of the flows using each link. >> >> I think that is a fine idea. >> >> But it would be easy for each simulation or experiment to also >> report the total bps over the link, in each direction. E.g., to >> compare with the aggregate receive rates. Or to allow "local" >> metrics about throughput vs. delay vs. drop rates for understanding >> the queue management. Or some such. > > True, that would be useful. > > I certainly wouldn't want people to think they should only do a > subset of what we agree on. On the other hand, I wouldn't want them to > feel obliged to do a superset either... > > Perhaps the goal should be to find a list of "types" of quantities to > measure (like link utilization), and say "*if* you're going to measure > quantity X, please include metric Y along with any others that you > choose" and "*if* you're interested in bandwidths in this range, > please include bandwidths from the following list". Do you have any > recommendations? I'll check the wording in your current draft... I would assume that each scenario in the evaluation suite would have a few key outputs, depending on the metrics being investigated in that scenario. E.g., the main scenarios might look at fairness, and at the tradeoffs between aggregate throughput, delay, and drop rates, and other metrics as well. For example, for looking at the tradeoffs between aggregate throughput, delay, and drop rates, I assume the scenario would include a range of traffic intensities and settings for the queue size (or target average queue size).
It *might* be useful to show both application-based metrics (sum of aggregate receive rates) and router-based metrics (for the congested link in question). But for each scenario in the evaluation suite, there might be additional useful information that is easily available, and that it would be useful to have output, simply to help the user understand what is going on if the user is so inclined. That is, "extra" information that is provided, but that is not part of the key metrics being evaluated in that scenario. That the user can look at or not, as they are so inclined... - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 21 Sep 2007 13:47:19 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: Greetings Sally, On 17/09/2007, Sally Floyd wrote: > > I would assume that each scenario in the evaluation suite would > have a few key outputs. > > But for each scenario in the evaluation suite, there might be > additional useful information that is easily available. Yes, it will be good to specify the key outputs. For a particular implementation of the tests (say an NS2 simulation), other information may be easily available. At the round-table, I was hoping to start on specifying common tests that could be done at different "levels of abstraction": analysis, simulation, emulation, and also real networks. Measuring the throughput of a router port is typically only possible using SNMP statistics which are updated infrequently. For that reason, I'd vote against it as a "key output", although of course an NS2 tool would probably make it available. I'd be happy to be outvoted... 
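[Editorial aside: the SNMP limitation described above is worth making concrete. Router-port throughput is recovered by differencing octet counters between polls, so the measurement can be no finer than the polling interval. A hypothetical sketch of the arithmetic, assuming a 32-bit counter in the style of IF-MIB's ifOutOctets:]

```python
def rate_from_counters(c0, t0, c1, t1, counter_bits=32):
    """Average throughput (bits/s) between two polls of an SNMP
    octet counter such as ifOutOctets, allowing for one counter wrap.

    (c0, t0) and (c1, t1) are (counter value, time in seconds)
    samples, with t1 > t0.
    """
    if t1 <= t0:
        raise ValueError("samples must be in increasing time order")
    delta = (c1 - c0) % (1 << counter_bits)  # handles a single wraparound
    return delta * 8 / (t1 - t0)

# Two polls 30 s apart; the counter advanced by 15,000,000 octets:
print(rate_from_counters(1_000_000, 0.0, 16_000_000, 30.0))  # -> 4000000.0
```

Note that at 1 Gbps a 32-bit octet counter wraps roughly every 34 seconds, so slow polling can silently miss whole wraps; 64-bit counters (ifHCOutOctets) avoid this, but the coarse time resolution remains, which supports the argument against using it as a "key output".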
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 24 Sep 2007 14:17:47 -0700 Subject: [Tmrg] Comments on draft-irtf-tmrg-metrics-10.txt Message-ID: Greetings Sally, Here are some further comments on draft-irtf-tmrg-metrics-10.txt. - Section 2.1.1 Throughput introduces the difference between throughput and goodput, and then says that maximizing throughput is desirable. This makes it sound as if throughput is more important than goodput, which I don't think was the intention. Perhaps replace "...because they can't be re-assembled into complete packets, and the like." by "...because they can't be re-assembled into complete packets, and the like. Except where clearly stated, this document refers to both throughput and goodput generically as 'throughput'." or rephrase the rest of the section/document in terms of maximizing goodput. - Section 2.4 Robustness for Challenging Environments defines goodput as a *fraction* of the sent data which is received, which seems to contradict the definition in 2.1.1 which is the *total amount* of data which is received. I think that the definition from 2.1.1 is more standard, and would recommend rephrasing the paragraph as: "Goodput: For wireless networks, goodput can be a useful metric, where goodput can be defined as the total amount of useful data delivered to users. A high ratio of goodput to sent data indicates an efficient use of the radio spectrum and lower interference with other users." Another couple of comments: - Section 2.1.2 Delay It might be worth mentioning that "delay" can also include the time spent queued at the sender due to window flow control, as well as at network queues and retransmissions. This is often the dominant source of socket-layer delay.
(This relates to Mark Allman's recent comments that spurious timeouts have a cost in terms of window reduction which must be weighed against the reduced waiting time of short RTO.) - Section 2.1.2 Delay When discussing "router-based" vs "flow-based" delay, it might be good to mention that "router-based" delay affects competing traffic, while "flow-based" delay does not. Thus, router-based delay induced by bulk data transfer applications is important, even if they aren't interested in per-packet transfer times. Perhaps the section could be rephrased as: Like throughput, delay can be measured as a router-based metric of queueing delay over time, or as a flow-based metric in terms of per- packet transfer times. For any rate controlled transfer, the per-packet transfer time will include time between when the application generates the packet and when the protocol allows it to be first sent. For reliable transfer, the per-packet transfer time seen by the application includes the possible delay of retransmitting a lost packet. Users of bulk data transfer applications might care about per-packet transfer times only in so far as they affect the per-connection transfer time. On the other end of the spectrum, for users of streaming media, per-packet delay can be a significant concern. Note that in some cases the average delay might not capture the metric of interest to the users; for example, some users might care about the worst-case delay, or about the tail of the delay distribution. Note that queueing delay is experienced by all flows sharing a link. Thus, bulk data transfer applications should still seek to achieve low queueing delay for the benefit of cross traffic, even if not for their own benefit. - (Nit picking) [F98] unnecessarily duplicates [KMT98]. The reference to [F98] in 2.3.2 should also be replaced by [KMT98]. 
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 25 Sep 2007 12:36:42 -0700 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <46DED185.4040408@mail.eecis.udel.edu> References: <46DED185.4040408@mail.eecis.udel.edu> Message-ID: Greetings all, To keep this moving, I've typed up a list of what we've discussed so far, at . Details of the venue etc are also linked to from that page. I'd like to start to finalize numbers. Could those who already expressed an interest in coming on 8-9 November please confirm that they are still coming? We still have half a dozen places at the table, so anyone else interested is cordially invited to let me know. As Wes pointed out, my current aim is not to make a comprehensive test suite, but to have a very core list of tests so that tests by different groups and using different technologies (simulation, dummynet etc) can be compared. For that reason, I'd suggest having a single MTU in the list. If we have jumbo frames, I'd prefer 4470 (the SONET MTU), which can be carried on a wider range of equipment. What do others think? Similarly, Grenville suggested off-list that we consider asymmetric bandwidths. It would be great to test that case, but I'd be inclined not to include it in the "core scenarios". Opinions? Cheers, Lachlan On 05/09/2007, Janardhan Iyengar wrote: > > Suggested addition to this list: > - set of Path MTUs of interest - 1500 bytes, 9000 bytes, (maybe 576 bytes? others?) 
-- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 25 Sep 2007 13:07:25 -0700 Subject: [Tmrg] Round table: Buffer sizes Message-ID: Greetings all, Another question on the list is buffer sizes. I'd obviously like to standardize on the buffer sizes that WAN-in-Lab supports "natively", namely 128 packets at 1Gbps and 16384 packets at 2.5Gbps, but they're fairly ad-hoc choices. WAN-in-Lab can alternatively be cajoled into using buffer sizes of any power of 2 from 128 to 8192 packets. An obvious buffer size to set is some multiple of the BDP, but that is not well defined if flows have different RTTs. Setting the buffer too large will mask the effects of RTT unfairness; for example, setting the buffer to be the size of the maximum BDP would mean that all RTTs are within a factor of 2, even if the actual path delays differ by a factor of 10. Also, Cisco buffer sizes seem to be specified in numbers of packets not bytes, and I believe Dummynet has an option to do that too. This makes a "BDP-sized" buffer hard to define with bidirectional traffic, since the number of packets depends on the fraction of ACKs vs full-sized packets. Given all that, can someone suggest suitable buffer sizes for this core set of tests? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: jheffner at psc.edu (John Heffner) Date: Tue, 25 Sep 2007 17:53:45 -0400 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: Message-ID: <46F98369.8030605@psc.edu> Lachlan Andrew wrote: > Greetings all, > > Another question on the list is buffer sizes.
> > I'd obviously like to standardize on the buffer sizes that WAN-in-Lab > supports "natively", namely 128 packets at 1Gbps and 16384 packet at > 2.5Gbps, but they're fairly ad-hoc choices. WAN-in-Lab can > alternatively be cajoled into using buffer sizes of any power of 2 > from 128 to 8192 packets. > > An obvious buffer size to set is some multiple of the BDP, but that is > not well defined if flows have different RTTs. Setting the buffer too > large will mask the effects of RTT unfairness; for example, setting > the buffer to be the size of the maximum BDP would mean that all RTTs > are within a factor of 2, even if the actual path delays differ by a > factor of 10. > > Also, Cisco buffer sizes seem to be specified in numbers of packets > not bytes, and I believe Dummynet has an option to do that too. This > makes a "BDP-sized " buffer hard to define with bidirectional traffic, > since the number of packets depends on the fraction of ACKs vs > full-sized packets. > > Given all that, can someone suggest suitable buffer sizes for this > core set of tests? Given the relatively wide range of buffer sizes in hardware out there at any given speed, and the significant effect buffer size can have on congestion control, it seems like this should be an extra dimension rather than fixed per link type. Maybe select a couple different sizes based on a set of drain times? Say, {1 ms, 10 ms, 100 ms, 1 sec}? Convert this to bytes or packets by dividing by the link byte or MTU packet rate. 
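[Editorial aside: the drain-time conversion suggested above can be written out directly; a minimal sketch (names my own) turning a target drain time into both byte-based and packet-based buffer sizes:]

```python
def buffer_from_drain_time(drain_time_s, link_bps, mtu_bytes=1500):
    """Size a buffer so it drains in drain_time_s at line rate.

    Returns (bytes, packets): bytes for byte-based buffers
    (drain_time * byte rate), packets for packet-based buffers
    (drain_time * MTU packet rate), following the suggestion above.
    """
    byte_rate = link_bps / 8                          # bytes per second
    buf_bytes = int(drain_time_s * byte_rate)
    buf_packets = int(drain_time_s * byte_rate / mtu_bytes)
    return buf_bytes, buf_packets

# The suggested drain times on a 1 Gbps link with a 1500-byte MTU:
for dt in (0.001, 0.010, 0.100, 1.0):
    print(dt, buffer_from_drain_time(dt, 1_000_000_000))
```

On a 1 Gbps link this set of drain times gives roughly 83 to 83,333 packets, which brackets the 128-16384 packet range mentioned for WAN-in-Lab.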
-John From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 25 Sep 2007 15:12:09 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: <46F98369.8030605@psc.edu> References: <46F98369.8030605@psc.edu> Message-ID: <52377da556dd59343ec870df1710d63a@mac.com> > Given the relatively wide range of buffer sizes in hardware out there > at > any given speed, and the significant effect buffer size can have on > congestion control, it seems like this should be an extra dimension > rather than fixed per link type. Yep, that makes sense to me. E.g., one test scenario could keep all other parameters fixed, vary the buffer size (or the average queue size for the AQM mechanism) at the congested link, and explore performance as a function of buffer size. This does not imply a *goal* that all congestion control mechanisms perform equally well for all buffer sizes; just that if a particular congestion control mechanism performs extremely poorly with small buffers, or gets much more or much less of its "share" of the bandwidth with very large buffers, it would be good for a set of "current best practice" simulation scenarios to detect this. As a point of information. - Sally http://www.icir.org/floyd/ From: lars.eggert at nokia.com (Lars Eggert) Date: Wed, 26 Sep 2007 10:31:54 +0300 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: <46DED185.4040408@mail.eecis.udel.edu> Message-ID: <20A4D241-511A-4F5C-BA72-2E1B96F25689@nokia.com> Hi, it might make sense to add one or two test cases that include a GSM or UMTS access link. Both have the interesting characteristic that the link delay is pretty high (~200ms for UMTS, > 1 sec for GSM) and worse, the delay can jump around quite a bit (i.e., double or more) on very short timescales. This might prove challenging for TCPs that use path delay for congestion estimation. That said, I can't point you at a good model for such links. But maybe someone else can?
Lars From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 27 Sep 2007 12:14:21 -0700 Subject: [Tmrg] Comments on draft-irtf-tmrg-metrics-10.txt In-Reply-To: References: Message-ID: <075efda316804da2ff757dc866dbfd07@mac.com> Lachlan - > Here are some further comments on draft-irtf-tmrg-metrics-10.txt. Many thanks, I made all of the changes that you suggested. (This feedback was just in time, as the document has just finished IRTF review, and is now being forwarded to the next stage in the process...) - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sat, 29 Sep 2007 17:51:33 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: <46F98369.8030605@psc.edu> References: <46F98369.8030605@psc.edu> Message-ID: Greetings John, On 25/09/2007, John Heffner wrote: > Lachlan Andrew wrote: > > > > suitable buffer sizes for this core set of tests? > > > Given the relatively wide range of buffer sizes in hardware out there at > any given speed, and the significant effect buffer size can have on > congestion control, it seems like this should be an extra dimension > rather than fixed per link type. Absolutely. As Sally suggests, it would be good to have some tests specifically along this dimension. That leaves open what value(s?) to use for this parameter for tests along other dimensions. > Maybe select a couple different sizes based on a set of drain times? > Say, {1 ms, 10 ms, 100 ms, 1 sec}? Convert this to bytes or packets by > dividing by the link byte or MTU packet rate. Sounds good. As I pointed out, converting delay to packets isn't as simple as dividing by MTU packet rate, because we have reverse ACKs which aren't full sized.
If we have an equal number of forward and backward flows and every second ACK is delayed, we need a buffer of roughly (3/2)(drain_time/MTU_rate) packets. Without delayed ACKs, it becomes roughly 2(drain_time/MTU_rate). In my limited experience, buffer sizes are typically powers of 2. I propose that all "core" tests use buffers with a limit (in packets) equal to a power of 2. Do others agree? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: jheffner at psc.edu (John Heffner) Date: Mon, 01 Oct 2007 11:53:59 -0400 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: <46F98369.8030605@psc.edu> Message-ID: <47011817.6050504@psc.edu> Lachlan Andrew wrote: > Greetings John, > > On 25/09/2007, John Heffner wrote: >> Lachlan Andrew wrote: >>> suitable buffer sizes for this core set of tests? >> >> Given the relatively wide range of buffer sizes in hardware out there at >> any given speed, and the significant effect buffer size can have on >> congestion control, it seems like this should be an extra dimension >> rather than fixed per link type. > > Absolutely. As Sally suggests, it would be good to have some tests > specifically along this dimension. That leaves open what value(s?) to > use for this parameter for tests along other dimensions. > >> Maybe select a couple different sizes based on a set of drain times? >> Say, {1 ms, 10 ms, 100 ms, 1 sec}? Convert this to bytes or packets by >> dividing by the link byte or MTU packet rate. > > Sounds good. As I pointed out, converting delay to packets isn't as > simple as dividing by MTU packet rate, because we have reverse ACKs > which aren't full sized. If we have an equal number of forward and > backward flows and every second ACK is delayed, we need a buffer of > roughly (3/2)(drain_time/MTU_rate) packets. Without delayed ACKs, > it becomes roughly 2(drain_time/MTU_rate).
What I was talking about was for a byte buffer, use (drain_time/byte_rate); for a packet buffer, (drain_time/packet_rate). Some devices use packet buffers, and others use byte buffers. I'm not sure there's a significant majority either way. As you say, their behavior is definitely different when not all packets are the same size (mixed-MTU or bi-directional traffic). I'm not sure if this effect is strong enough that you need to do both, but it couldn't hurt. I'll admit I haven't been following this group closely enough to know exactly what the objective is for the "core" set of tests. > In my limited experience, buffer sizes are typically powers of 2. I > propose that all "core" tests use buffers with a limit (in packets) > equal to a power of 2. Do others agree? Hm, that's not really my experience. Linux, for instance, uses a default of 100 or 1000 packets with most ethernet drivers. Especially when talking about packets rather than bytes, a power of two seems kind of arbitrary. I think a lot of older Fast/Gig switches used 64k buffers, or 42-43 packets at 1500 bytes. OTOH, I don't think there's anything bad about using a power of two if it's convenient. -John From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 1 Oct 2007 13:30:52 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: <46F98369.8030605@psc.edu> Message-ID: <65b35fd1d6c8959209fe942f1c5ddece@mac.com> > In my limited experience, buffer sizes are typically powers of 2. I > propose that all "core" tests use buffers with a limit (in packets) > equal to a power of 2. Do others agree? It makes sense to me to limit buffer sizes to powers of two. I also think that the purpose of a "core" set of tests is to explore how congestion control mechanisms perform under a range of conditions (including boundary conditions of various kinds). 
That is, in my view, the point of a core set of tests is not "whoever gets the highest score on these tests wins", but in contrast, a set of tests that hopefully will shed some light on the strong and weak points (or the tradeoffs in design) for the congestion control mechanism under test. With some added guide of "we believe that these tests represent fairly realistic scenarios", and "these other tests represent fairly unrealistic scenarios, but are included to test boundary conditions of various kinds, or to test conjectured conditions of the future." With that view in mind, I assume that a core set of tests will include tests with buffers in packets and with buffers in bytes, and with both Drop-Tail and AQM. And I assume that a core set of tests will include (but not necessarily be limited to) a realistic mix of packet sizes for data packets on the congested link. My memory is that for some links that have been measured, a realistic mix means 90% of data packets with 1500 bytes, with a mix for the remaining data packets of 500 bytes, 4000 bytes, 200 bytes, and the like. I also assume that a core set of tests is a set that an average researcher can run on their computer over the weekend (or in only a few days). So of course, many of the tests will have to be short samples of the possible space. - Sally http://www.icir.org/floyd/ From: jhealy at swin.edu.au (James Healy) Date: Tue, 02 Oct 2007 17:22:10 +1000 Subject: [Tmrg] Logging active TCP details in FreeBSD 5, 6 and 7 In-Reply-To: References: <46E0F2BE.8090405@swin.edu.au> Message-ID: <4701F1A2.50807@swin.edu.au> Hi Lachlan, Lachlan Andrew wrote: > Would it be possible to give it an interface like Web100? We're > currently building our benchmarking suite to use (a slightly > optimized) Web100 to monitor system internals, and it would be great > to be able to use SIFTR without too much modification of our suite.
Unfortunately, we don't currently have the resources to expand SIFTR to include a web100 compatible interface. We can certainly see the merit in the idea though, and would be happy to liaise with anybody that would like to look into it. regards, James Healy Research Assistant http://caia.swin.edu.au From: jheffner at psc.edu (John Heffner) Date: Tue, 02 Oct 2007 14:18:51 -0400 Subject: [Tmrg] Logging active TCP details in FreeBSD 5, 6 and 7 In-Reply-To: <4701F1A2.50807@swin.edu.au> References: <46E0F2BE.8090405@swin.edu.au> <4701F1A2.50807@swin.edu.au> Message-ID: <47028B8B.4080907@psc.edu> James Healy wrote: > Hi Lachlan, > > Lachlan Andrew wrote: >> Would it be possible to give it an interface like Web100?
We're >> currently building our benchmarking suite to use (a slightly >> optimized) Web100 to monitor system internals, and it would be great >> to be able to use SIFTR without too much modification of our suite. > > Unfortunately, we don't currently have the resources to expand SIFTR to > include a web100 compatible interface. We can certainly see the merit in > the idea though, and would be happy to liase with anybody that would > like to look into it. SIFTR and Web100 (the TCP ESTATS MIB) are very different ways of instrumenting TCP. I'm not sure it would even make sense to try to use the same interface. -John From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 2 Oct 2007 14:02:05 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: <65b35fd1d6c8959209fe942f1c5ddece@mac.com> References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: Greetings Sally, Thanks for clearly stating your vision of the core tests. My tentative goals are below. On 01/10/2007, Sally Floyd wrote: > > I also think that the purpose of a "core" set of tests is to > explore how congestion control mechanisms perform under a > range of conditions (including boundary conditions of various > kinds). That is, in my view, the point of a core set of tests is > not "whoever gets the highest score on these tests wins", but > in contrast, a set of tests that hopefully will shed some light > on the strong and weak points (or the tradeoffs in design) > for the congestion control mechanism under test. Agreed. > With > some added guide of "we believe that these tests represent > fairly realistic scenarios", and "these other tests represent fairly > unrealistic scenarios, but are included to test boundary > conditions of various kinds, or to test conjectured conditions > of the future." > > With that view in mind, I assume that a core set of tests will > include tests with buffers in packets and with buffers in bytes, > and with both Drop-Tail and AQM. 
It would be good to have such tests, but that is more ambitious/comprehensive than I had in mind for the initial November round table. My aim was more "white-box" than "black-box": trying to find some simple tests which can be performed on a range of physical and simulated testbeds. If (big if!) physical routers predominantly have buffers in packets, then I'd prefer to start with a subset of the core tests which only use buffers in packets. Motivated by past debates over different labs' tests, I was also more interested in repeatability than realism. If we get different results using simulation from dummynet or different results using dummynet from real WAN testbeds, it would be ideal if the results are "clean" enough to find out what causes the difference. That means many of the tests may lack important attributes like "web" cross traffic -- although of course there must also be enough tests with cross traffic to see how the algorithm will perform in practice. Once one or two tests have been defined precisely enough to be repeatable by different labs, it would of course be good to extend to a "wider core", like the one you describe. > And I assume that a core set of tests will include (but not > necessarily be limited to) a realistic mix of packet sizes for > data packets on the congested link. My memory is that for > some links that have been measured, a realistic mix means > 90% of data packets with 1500 bytes, with a mix for the > remaining data packets of 500 bytes, 4000 bytes, 200 bytes, > and the like. Yes, some tests like that would be good. However, since the MTU is a property of an interface, it increases the number of senders required to run the experiments. Also, any experiments with over 1500-byte packets can't be run on GbE hardware. Since the TCP algorithms themselves determine the percentages of traffic, we should specify the traffic in terms of the number of flows with each MTU, rather than the amount of traffic.
How about specifying 90% of flows use 1500-byte, and 10% of flows use 536-byte? Limiting to only two distinct MTUs also reduces the number of computers needed. It can be done with only 8 NICs (2 senders, 2 receivers and 4 on a Dummynet). > I also assume that a core set of tests is a set that an average > researcher can run on their computer over the weekend > (or in only a few days). So of course, many of the tests will > have to be short samples of the possible space. Yes, short samples will be important. Again, at this stage I'd like to design the tests for hardware implementation, and then check that they're also OK for simulation, rather than the other way around. We can look into exploiting the extra flexibility of simulation in "Round Table II". Of course, the actual focus will depend on what the attendees are interested in. Cesar has put together a tentative agenda at . Feedback is welcome. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 2 Oct 2007 14:27:30 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: Greetings again, On 02/10/2007, Lachlan Andrew wrote: > On 01/10/2007, Sally Floyd wrote: > > some links that have been measured, a realistic mix means > > 90% of data packets with 1500 bytes, with a mix for the > > remaining data packets of 500 bytes, 4000 bytes, 200 bytes, > > and the like. > > Since the TCP algorithms themselves determine the percentages of > traffic, we should specify the traffic in terms of the number of > flows with each MTU, rather than the amount of traffic. How about > specifying 90% of flows use 1500-byte, and 10% of flows use 536-byte? 
On second thoughts, TSO and iperf's blocking themselves produce a significant number of packets below the MTU. If we have 100% of flows with an MTU of 1500 and use TSO, we may automatically get 90% 1500 byte packets, and 10% smaller ones. If we rely on this artifact, we should control for it (specify iperf parameters?), and specify how to get comparable results in simulations. Having one flow with 10% of its packets small is very different from having 10% of flows with all of their packets small. Sally, I assume the study you referred to was pre-TSO, but many of the smaller packets could still have been runts from connections with larger MTUs. Thoughts? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: jheffner at psc.edu (John Heffner) Date: Wed, 03 Oct 2007 13:54:12 -0400 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: <4703D744.1010505@psc.edu> Lachlan Andrew wrote: > Greetings again, > > On 02/10/2007, Lachlan Andrew wrote: >> On 01/10/2007, Sally Floyd wrote: >>> some links that have been measured, a realistic mix means >>> 90% of data packets with 1500 bytes, with a mix for the >>> remaining data packets of 500 bytes, 4000 bytes, 200 bytes, >>> and the like. >> Since the TCP algorithms themselves determine the percentages of >> traffic, we should specify the traffic in terms of the number of >> flows with each MTU, rather than the amount of traffic. How about >> specifying 90% of flows use 1500-byte, and 10% of flows use 536-byte? > > On second thoughts, TSO and iperf's blocking themselves produce a > significant number of packets below the MTU. If we have 100% of flows > with an MTU of 1500 and use TSO, we may automatically get 90% 1500 > byte packets, and 10% smaller ones. 
> > If we rely on this artifact, we should control for it (specify iperf > parameters?), and specify how to get comparable results in > simulations. > > Having one flow with 10% of its packets small is very different from > having 10% of flows with all of their packets small. Sally, I assume > the study you referred to was pre-TSO, but many of the smaller packets > could still have been runts from connections with larger MTUs. > > Thoughts? TSO traffic will often give you only MSS-sized segments until the window gets large enough that you start sending full 64k packets down to the driver. (If I remember correctly how it works now -- there have been so many changes..) One thing to try might be using setsockopt(TCP_MAXSEG) on some flows. That's probably easier than changing interface MTUs. -John From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 3 Oct 2007 11:04:30 -0700 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: <4703D744.1010505@psc.edu> References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> <4703D744.1010505@psc.edu> Message-ID: On 03/10/2007, John Heffner wrote: > > TSO traffic will often give you only MSS-sized segments until the window > gets large enough that you start sending full 64k packets down to the > driver. (If I remember correctly how it works now -- there have been so > many changes..) OK. I remember seeing fragments coming out, but as you say, that might have been a transient state of the code. > One thing to try might be using setsockopt(TCP_MAXSEG) on some flows. > That's probably easier than changing interface MTUs. Thanks! That will certainly make things more flexible. 
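[Editorial aside: the TCP_MAXSEG suggestion above can be sketched as follows. The helper names are my own, and the 40-byte deduction assumes plain IPv4 and TCP headers with no options; per the Linux tcp(7) man page, setting TCP_MAXSEG before connect() also caps the MSS announced to the peer:]

```python
import socket

def mss_for_mtu(mtu):
    """MSS implied by a path MTU, assuming 20-byte IPv4 and 20-byte
    TCP headers with no options: 1500 -> 1460, 576 -> 536."""
    return mtu - 40

def clamp_flow_mss(sock, mtu):
    """Cap a TCP socket's MSS before connect(), emulating a small-MTU
    flow without changing the interface MTU."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, mss_for_mtu(mtu))

# Example: make one test flow behave like a 576-byte-MTU path:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clamp_flow_mss(s, 576)   # this flow now sends at most 536-byte segments
s.close()
print(mss_for_mtu(1500), mss_for_mtu(576))  # -> 1460 536
```

This sidesteps the problem that MTU is a property of an interface: a 90%/10% mix of flow MSS values can then be set per socket on a single pair of hosts.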
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: lstewart at room52.net (Lawrence Stewart) Date: Wed, 10 Oct 2007 11:51:53 +1000 Subject: [Tmrg] Software for FreeBSD TCP R&D: SIFTR v1.1.4 and DPD v1.0 released Message-ID: <470C3039.9030302@room52.net> Hi All, Further to Grenville's recent email regarding SIFTR, we just wanted to give you a quick heads up regarding the availability of a new SIFTR (Statistical Information for TCP Research) version and the debut release of DPD (Deterministic Packet Discard). SIFTR v1.1.4 addresses a couple of issues, one of which is applicable to users of SIFTR in FreeBSD 7-CURRENT. Read the changelog and readme for more information. DPD is a new FreeBSD kernel module we developed to further aid us in our ongoing TCP research. It allows for the deterministic dropping of TCP packets from within the FreeBSD kernel via a simple sysctl interface. This is particularly useful for anyone that is interested in observing TCP reacting to packet loss events (e.g. congestion control researchers). Being able to drop the same packet(s) across multiple tests allows for simpler comparisons of TCP behaviour. We've found it particularly useful in evaluating and observing the behaviour of different congestion control mechanisms, and hope it may be of use to others out there. Please refer to the DPD readme for more in-depth information. The software and documentation is freely available under a BSD licence from: http://caia.swin.edu.au/urp/newtcp/tools.html We would be very happy to hear from anyone regarding bugs and suggestions as well. 
Cheers, Lawrence http://caia.swin.edu.au From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sat, 27 Oct 2007 13:00:01 -0700 Subject: [Tmrg] convergence time Message-ID: Greetings all, With the TCP evaluation round table just under two weeks away, let's keep the discussions moving. It would be great for people to start threads on whatever issues they think we should agree on. Even if you're not coming in person or through VRVS, feel free to start a thread. How should we measure the responsiveness of a TCP algorithm? One way is to measure "convergence time" as the time after a step change in traffic until the rate is within x% of its final value. If we agree on that, we need to decide: - What should x be? I think it is not very critical, as long as we agree on it. If it is too low (like within 10%), it becomes too sensitive to how we measure rates. If it is too high (like within 50%), it doesn't capture the whole convergence process. Is 30% OK? - How do we determine the "final value" of rate? If everything is symmetric, we could just take it as the "fair" (equal) rates, but I think we should define it independently of fairness, for those cases in which flows never reach equal rates. For experiments with just one step change followed by a long period of "steady state" (possibly with cross traffic coming and going), we can just average over a period "a long time" after the event. How long should that be? It could be something like "when the rate of change has dropped to 5% of the original rate of change" or some such. - What timescale should we average the "current" rate over? Rates vary due to AIMD and cross traffic, as well as the convergence process. For loss-based protocols, including hybrid loss+delay, we could base it on the rate (or window) just before or just after a loss event. For non-loss-based protocols, we could simply average over one RTT. Thoughts? - The convergence time depends on settings, such as the number of competing flows, and the RTT.
Should we specify a few settings specifically for determining "the convergence speed of the algorithm", or should we just say how to measure convergence time for each experiment? - The rise time of a single flow to an empty system is not very interesting, because it mainly measures the impact of slow start. Is it interesting to consider the response of one existing flow to one new flow? An alternative is to consider time to settle when a flow *departs*, although that mainly measures the aggressiveness of the protocol, rather than its responsiveness. - We need to make these repeatable. That is particularly hard with cross traffic. Should we specify a minimum number of runs to average over? If so, there is a tradeoff between accuracy and time to complete the tests. Would averaging over 5 tests be enough? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 From: weixl at caltech.edu (Xiaoliang (David) Wei) Date: Sun, 28 Oct 2007 23:41:24 -0700 (PDT) Subject: [Tmrg] convergence time In-Reply-To: References: Message-ID: Hi Lachlan, Thanks for the good summary. It seems to me that part of the problems occur because the concept of convergence really depends on the degree of stability of the protocols and the timescale of the measurement. These are inherent if we measure the "current rate". Another option, to eliminate the dependence on stability and timescale, is that we don't study the convergence of the current rate. Instead, we study the convergence of the aggregate average rate. That is, if the instantaneous rate of a flow at time t is x(t), we define the aggregate average rate of the flow at time t to be X(t) = 1/t * sum u=0->t x(u). ("sum" can be "integrate" if the time is continuous). Then we study the convergence of the curve X(t) to the "final value". This process might be easier as: 1.
X(t) is easier to measure because we can just look at the amount we have transferred from time 0 to time t; 2. X(t) converges even if x(t) has a limit-cycle oscillation, so it is less sensitive to stability; 3. If x(t) converges fast, X(t) converges fast too, so we can still compare the convergence with X(t); 4. X(t) does have meaning in user experience. It measures how long the users have to participate in the network to get to the desired rate. -David On Sat, 27 Oct 2007, Lachlan Andrew wrote: > Greetings all, > > With the TCP evaluation round table just under two weeks away, let's > keep the discussions moving. It would be great for people to start > threads on whatever issues they think we should agree on. Even if > you're not coming in person or through VRVS, feel free to start a > thread. > > > How should we measure the responsiveness of a TCP algorithm? One way > is to measure "convergence time" as the time after a step change in > traffic until rate is within x% of its final value. > > If we agree on that, we need to decide: > > - What should x be? I think it is not very critical, as long as we > agree on it. If it is too low (like within 10%), it becomes too > sensitive to how we measure rates. If it is too high (like within > 50%), it doesn't capture whole convergence process. Is 30% OK? > > - How do we determine the "final value" of rate? If everything is > symmetric, we could just take is at the "fair" (equal) rates, but I > think we should define it independently for fairness, for those cases > in which flows never reach equal rates. For experiments with just one > step change followed by a long period of "steady state" (possibly with > cross traffic coming and going), we can just average over a period "a > long time" after the event. How long should that be? It could be > something like "when the rate of change has dropped to 5% of the > original rate of change" or some such. > > - What timescale should we average the "current" rate over?
Rates > vary due to AIMD and cross traffic, as well as the convergence > process. For loss-based protocols, including hybrid loss+delay, we > could base it on the rate (or window) just before or just after a loss > event. For non-loss-based protocols, we could simply average over one > RTT. Thoughts? > > - The convergence time depends on setting, such as the number of > competing flows, and the RTT. Should we specify a few settings > specifically for determining "the convergence speed of the algorithm", > or should we just say how to measure convergence time for each > experiment? > > - The rise time of a single flow to an empty systems is not very > interesting, because it many measures the impact of slow start. Is it > interesting to consider the response of one existing flow to one new > flow? An alternative is to consider time to settle when a flow > *departs*, although that mainly measure the aggressiveness of the > protocol, rather than its responsiveness. > > - We need to make these repeatable. That is particularly hard with > cross traffic. Should we specify a minimum number of runs to average > over? If so, there is a tradeoff between accuracy and time to > complete the tests. Would averaging over 5 tests be enough? > > Cheers, > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Phone: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > From: andrea.baiocchi at uniroma1.it (Andrea Baiocchi) Date: Tue, 30 Oct 2007 10:39:30 +0100 Subject: [Tmrg] convergence time In-Reply-To: References: Message-ID: Dear Lachlan, dear all, I am following this mailing list with great interest, especially about the round table and TCP measurement/evaluation issues. Many thanks to Lachlan for his much-appreciated initiative.
Let me try to contribute. At 13:00 -0700 27-10-2007, Lachlan Andrew wrote: >How should we measure the responsiveness of a TCP algorithm? Could it be sensible to look at responsiveness from a point of view closer to the user, by measuring the time required to deliver a given amount of data Bklg as a function of the value of Bklg? This is a curve rather than a single value, but useful indications could be extracted from it (e.g. the asymptotic growth rate). >- The rise time of a single flow to an empty systems is not very >interesting, because it many measures the impact of slow start. Is it >interesting to consider the response of one existing flow to one new >flow? An alternative is to consider time to settle when a flow >*departs*, although that mainly measure the aggressiveness of the >protocol, rather than its responsiveness. My proposal above suffers from slow start dependence. It could be "generalized" (albeit also complicated) by considering the amount of time required to deliver one INCREMENT of Bklg, say DeltaB, after an amount Bklg0 has already been delivered. As an example, given Bklg0, time required to deliver further data DeltaB=alpha*Bklg0, with alpha=0.1. Parameters to choose for this measurement are in general Bklg0 and alpha. This last measurement could also be normalized to the overall time required to deliver the Bklg0 amount of data. For both measurements it would clearly be key to precisely define the scenario (cross traffic, link/topology changes, whatever stresses the "responsiveness" of TCP). Thank you for your attention. Best regards, Andrea Baiocchi -- ********************************************************* Andrea Baiocchi, PhD INFOCOM Dept.
- University of Roma "La Sapienza" Via Eudossiana 18 - 00184 Roma (Italy) E-mail: andrea.baiocchi at uniroma1.it Phone +39 0644585654 Fax: +39 064744481 ********************************************************** From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 31 Oct 2007 09:18:13 -0800 Subject: [Tmrg] convergence time In-Reply-To: References: Message-ID: Greetings David, On 28/10/2007, Xiaoliang (David) Wei wrote: > Another option, to eliminate the dependency to stability and > timescale, is that we don't study the convergence of current rate. > Instead, we study the convergence of the aggregate average rate. That is, > if the instanenous rate of a flow at time t is x(t), we define the > aggregate average rate of the flow at time t to be > X(t) = 1/t * sum u=0->t x(u). > ("sum" can be "integrate" if the time is continous). Good idea. > Then we study the convergence of the curve X(t) to the "final value". > This process might be easier as: > 1. X(t) is easier to measure because we can just look at the amount we > have transfered from time 0 to time t; > 2. X(t) converges even x(t) has a limit-cycle oscillation, so it is less > sensitive to stability > 3. If x(t) converges fast, X(t) converges fast too. We can still compare > the convergence with X(t) > 4. X(t) does have meaning in user-experience. It measures how long the > users have to participate in the network to get to the desired rate. They're all good points. The main drawback is that X(t) converges (much) more slowly, since it always gives some weight to the early rates. If we want to observe the impact of each of several newly arriving flows, we need to space them out further if we use X(t) than we do if we use x(t), or else the transients will interact. The time required to find the "final" value could already be quite long, especially in the case of Reno, which takes hours to reach equilibrium on large BDP paths. What do others think? 
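[As a concrete illustration of the two candidate metrics in this thread, the aggregate average rate X(t) and a thresholded convergence time can be computed from a rate trace as follows. This is only a sketch with hypothetical trace data; the function names and the sample values are my own, not from any experiment discussed here:]

```python
def aggregate_average_rate(times, cum_bytes):
    """X(t): cumulative bytes delivered by time t, divided by t.
    Converges even when the instantaneous rate x(t) oscillates.
    All times must be > 0."""
    return [b / t for t, b in zip(times, cum_bytes)]

def convergence_time(times, rates, final_rate, x=0.3):
    """Thresholding: the last time the measured rate lies outside a
    +/- x band around its final value (x = 0.3 means 'within 30%').
    After the returned time, the rate stays inside the band."""
    t_conv = 0.0
    for t, r in zip(times, rates):
        if abs(r - final_rate) > x * final_rate:
            t_conv = t
    return t_conv
```

[For example, a flow measured at rates 10, 6, 5.2, 5.1, 5.0 at times 1..5 with final rate 5.0 is outside the 30% band only at t=1, so its convergence time under this definition is 1.]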
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 31 Oct 2007 09:31:24 -0800 Subject: [Tmrg] convergence time In-Reply-To: References: Message-ID: Greetings Andrea, On 30/10/2007, Andrea Baiocchi wrote: > > At 13:00 -0700 27-10-2007, Lachlan Andrew wrote: > >How should we measure the responsiveness of a TCP algorithm? > > can it be sensible to look at responsiveness from a point of view > closer to the user, by measuring the time required to deliver a given > amount of data Bklg as a function of the value of Bklg? This is a > curve not a single value, but useful indication could be extracted > from there (i.e. asymptotic growth rate). That is a very useful metric for the "efficiency" or "aggressiveness" of the algorithm. It is essentially the "flow completion time" metric that Nandita and Nick have been promoting. When I spoke of "responsiveness", I meant "response to changes in network conditions", which is something not captured by the metric you mention. Whatever we call them, we need to measure both effects. > My proposal above suffers from slow start dependence. It could be > "generalized" (albeit also complicated) by considering the amount of > time required to deliver one INCREMENT of Bklg, say DeltaB, after an > amount Bklg0 has already been delivered. As an example, given Bklg0, > time required to deliver further data DeltaB=alpha*Bklg0, with > alpha=0.1. Parameters to choose for this measurement are in general > Bklg0 and alpha. This last measurement could also be normalized to > the overall time required to deliver Bklg0 amount of data. That metric is equivalent to the average rate over some interval. Can you think of a way to average out the effects of AIMD in that measurement? 
It would either need alpha to be quite large, or to be specially tuned to the time between loss events. Otherwise, for given parameters alpha and Bklg0, an algorithm could alternate between doing well and doing poorly as the RTT increases, as the interval moves from (just before a loss) to (just after a loss). > Thank you for you attention. Thank you for your input! Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 31 Oct 2007 11:16:45 -0800 Subject: [Tmrg] [Iccrg] Re: convergence time In-Reply-To: <238973.33571.qm@web51808.mail.re2.yahoo.com> References: <238973.33571.qm@web51808.mail.re2.yahoo.com> Message-ID: Greetings Dirceu, On 31/10/2007, Dirceu Cavendish wrote: > I find transient interaction effects VERY interesting to study... True. But they are also complex. My aim with this roundtable was to agree on some simple "single numbers" to make comparison between different people's experiments easier. If we measure convergence time as the time for X(t) to reach within 20% of its final value, but in the experiment, X(t) never reaches its final value, then we are left with no numeric measure of the convergence time. What information will X(t) tell us about the interactions that isn't more apparent from x(t)? The benefits David mentioned apply mainly to the case without interactions. > The bottom line is: agreeing on X(t) performance metrics instead of x(t) > does not LIMIT in any way the expressiveness of experimental results, since > we can always reduce X(t) to x(t) by using a single flow... I agree that we can get x(t) from X(t) by differentiating it (regardless of how many flows). In terms of "expressiveness", they both carry exactly the same amount of information, if we want to compare the whole functions, rather than thresholding them.
My question is how we can make that comparison. Does anyone have any suggestions? To me, we need to agree on some sort of data reduction, and thresholding is a simple example. Thresholding X(t) seems more problematic. Of course, that problem may go away if there is a better alternative than thresholding. Thanks, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 5 Nov 2007 15:35:07 -0800 Subject: [Tmrg] flow completion times in uncongested systems In-Reply-To: References: Message-ID: <931ed46c0e4ce65fa8d7611e23060393@mac.com> > - The rise time of a single flow to an empty systems is not very > interesting, because it many measures the impact of slow start. Actually, the flow completion time in an uncongested system can be a quite interesting thing to measure, particularly if one is evaluating one of the many proposals for start-ups faster than slow-start. (One of these proposals is Quick-Start, RFC 4782; some of the others are discussed in Appendix A of RFC 4782.) I would recommend having scenarios include the case of a generally-uncongested link, as well as including cases with various levels of congestion, with metrics including per-flow transfer times, fairness, and aggregate packet drop rates. This should illustrate some of the good and potentially-bad aspects of protocols with fast start-ups. - Sally http://www.icir.org/floyd/ RFC 4782: http://www.ietf.org/rfc/rfc4782.txt From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 5 Nov 2007 15:42:54 -0800 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: Greetings all, Thanks to those who have agreed to attend the TCP evaluation round table. If anyone else would like to attend by EVO video conference, please let me know. 
There is still time to give feedback on the tentative schedule at I've also tentatively allocated a "discussion leader" to each session, as follows: Thursday: Lachlan Andrew Opening discussion of scope of this meeting Cesar Marcondes Benchmarking Scenario Parameters (Bandwidth, Delay, Buffer) Lars Eggert Measure of Utilization Bob Shorten Overall Responsiveness / Convergence Time Discussion Sally Floyd New algorithms Impact on Cross-Traffic Friday: Sangtae Ha Managing the Curse of Dimensionality on Core Tests Gang Wang Basic Core Scenarios Larry Dunn Basic Cross-Traffic Models Bob, will you be OK to lead the discussion by voice link, or do you think it would be better for someone present in person to do it? My current hope is that we can write up the agreement as a PFLDnet paper, with the discussion leaders writing the first draft of their respective sections. Ideally it would be good for this to progress into a TMRG draft, but I don't think we'll get there this week. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 5 Nov 2007 15:50:23 -0800 Subject: [Tmrg] [Iccrg] TCP evaluation suite round-table In-Reply-To: References: Message-ID: <3929de282a33daa94925ac99c258ce53@mac.com> Lachlan Many thanks for the "Literature Review" web pages attached to the agenda at "http://wil.cs.caltech.edu/mwiki/index.php?title=Round_table_agenda". Documents that could be added to the literature review include the following: * S. Floyd and E. Kohler, "Tools for the Evaluation of Simulation and Testbed Scenarios", internet-draft draft-irtf-tmrg-tools-04, work in progress, July 2007. - http://tools.ietf.org/html/draft-irtf-tmrg-tools-04 * S. Floyd and E. Kohler, "Internet Research Needs Better Models", Hotnets-I, October 2002. 
- http://www.icir.org/models/hotnetsFinal.pdf * "Internet Research Needs Better Models" web site, - http://www.icir.org/models/bettermodels.html (With web pages on topology modeling, traffic generation, and the like.) * "Transport Modeling Research Group" web page, - http://www.icir.org/tmrg/ - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 5 Nov 2007 15:55:13 -0800 Subject: [Tmrg] flow completion times in uncongested systems In-Reply-To: <931ed46c0e4ce65fa8d7611e23060393@mac.com> References: <931ed46c0e4ce65fa8d7611e23060393@mac.com> Message-ID: Greetings Sally, On 05/11/2007, Sally Floyd wrote: > > - The rise time of a single flow to an empty systems is not very > > interesting, because it many measures the impact of slow start. > > Actually, the flow completion time in an uncongested system can be > a quite interesting thing to measure, particularly if one is > evaluating one of the many proposals for start-ups faster than > slow-start. (One of these proposals is Quick-Start, RFC 4782; some > of the others are discussed in Appendix A of RFC 4782.) True. I should have said that this is a very different quantity from the responsiveness of the algorithm to control the window after slow start. As you say, it is of independent interest. > I would recommend having scenarios include the case of a > generally-uncongested link, as well as including cases with various > levels of congestion, with metrics including per-flow transfer > times, fairness, and aggregate packet drop rates. This should > illustrate some of the good and potentially-bad aspects of protocols > with fast start-ups. Yes, testing a range of levels of congestion is good. My concern was that we should not just measure the performance of a post-slow-start modification by observing slow-start behaviour.
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 5 Nov 2007 15:57:21 -0800 Subject: [Tmrg] [Iccrg] TCP evaluation suite round-table In-Reply-To: <3929de282a33daa94925ac99c258ce53@mac.com> References: <3929de282a33daa94925ac99c258ce53@mac.com> Message-ID: Thanks for the links, Sally. Cesar Marcondes from UCLA actually did the literature reviews up, so all credit goes to him. Cheers, Lachlan On 05/11/2007, Sally Floyd wrote: > Lachlan > > Many thanks for the "Literature Review" web pages attached to the > agenda at > "http://wil.cs.caltech.edu/mwiki/index.php?title=Round_table_agenda". > > > Documents that could be added to the literature review include > the following: > > * S. Floyd and E. Kohler, "Tools for the Evaluation of Simulation > and Testbed Scenarios", internet-draft draft-irtf-tmrg-tools-04, > work in progress, July 2007. > - http://tools.ietf.org/html/draft-irtf-tmrg-tools-04 > > * S. Floyd and E. Kohler, "Internet Research Needs Better Models", > Hotnets-I, October 2002. > - http://www.icir.org/models/hotnetsFinal.pdf > > * "Internet Research Needs Better Models" web site, > - http://www.icir.org/models/bettermodels.html > (With web pages on topology modeling, traffic generation, and the > like. 
> > * "Transport Modeling Research Group" web page, > - http://www.icir.org/tmrg/ > > - Sally > http://www.icir.org/floyd/ > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 5 Nov 2007 16:11:06 -0800 Subject: [Tmrg] Round table: level of realism of tests? In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: Lachlan - My apologies for responding a month late on this! My unanswered email folder has been building up on me. ... > Motivated by past debates over different labs' tests, I was also more > interested in repeatability than realism. If we get different results > using simulation from dummynet or different results using dummynet > from real WAN testbeds, it would be ideal if the results are "clean" > enough to find out what causes the difference. That means many of > the tests may lack important attributes like "web" cross traffic -- > although of course there must also be enough tests with cross traffic > to see how the algorithm will perform in practice. Hmmm. My own view would be that any clearly unrealistic test scenarios should be explicitly labeled as such. One of my other views (as expressed in the 2002 Hotnets paper on "Internet Research Needs Better Models") is that a reliance on unrealistic scenarios in evaluating transport protocols (e.g., scenarios with only one-way traffic, only long-lived flows, or only flows all with the same RTT) could do a serious dis-service in the design and evaluation of transport protocols.
Thus, my own view would be that test scenarios that were clearly repeatable but clearly unrealistic could do more harm than good, if people were going to rely on that for evaluating transport protocols. But maybe others have a different view, who knows? - Sally http://www.icir.org/floyd/ The 2002 Hotnets paper: http://www.icir.org/models/hotnetsFinal.pdf From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 5 Nov 2007 17:09:08 -0800 Subject: [Tmrg] Round table: level of realism of tests? In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: Greetings Sally, On 05/11/2007, Sally Floyd wrote: > My apologies for responding a month late on this! > My unanswered email folder has been building up on me. No worries. I know you're really busy. > > Motivated by past debates over different labs' tests, I was also more > > interested in repeatability than realism. If we get different results > > using simulation from dummynet or different results using dummynet > > from real WAN testbeds, it would be ideal if the results are "clean" > > enough to find out what causes the difference. Than means many of > > the tests may lack important attributes like "web" cross traffic -- > > although of course there must also be enough tests with cross traffic > > to see how the algorithm will perform in practice. > > Hmmm. *grin* > My own view would be that any clearly unrealistic test scenarios > should be explicitly labeled as such. Good idea. We could classify tests as those "trying to understand" behaviour vs those "evaluating" behaviour. The first group could usefully have tests which would be misleading if they were in the second group. 
> One of my other views (as > expressed in the 2002 Hotnets paper on "Internet Research Needs > Better Models") is that a reliance on unrealistic scenarios in > evaluating transport protocols (e.g., scenarios with only one-way > traffic, only long-lived flows, or only flows all with the same > RTT) could do a serious dis-service in the design and evaluation > of transport protocols. True, that is a danger we must try to avoid. It has to be balanced against the need to hold some variables fixed while others are varied. For example, if we're looking at the impact of the number of hops on fairness, there is a case to keep RTTs equal to eliminate the effect of RTT-unfairness for that experiment. We can also do things to maximize repeatability without reducing realism, like agreeing on some deterministic (pseudo-random?) cross traffic rather than every experiment having unique cross traffic. I've always thought of this round-table as only a start, which is why I felt comfortable suggesting having too many "understanding" experiments, at the expense of "evaluating" experiments. But the table is round, so my suggestion won't determine what happens. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: molnar at tmit.bme.hu (Sándor Molnár) Date: Tue, 06 Nov 2007 11:57:57 +0100 Subject: [Tmrg] suggestions on fairness metrics to the TCP evaluation round-table Message-ID: <473048B5.6050108@tmit.bme.hu> Hi All, We have recently completed a project on the fairness analysis of high speed transport protocols. One of our results is a suggested new performance metric (saturation time) that can be important for the dynamical aspects of interacting protocols. We have found that the short-term dynamics could have significant impacts on long-term fairness.
We hope it is relevant to the topic of the TCP evaluation round table, especially to convergence time. Our results can be found in our downloadable technical report, see below. The website of our project: http://qosip.tmit.bme.hu/~sonkoly/Tcp/ The technical report can be downloaded from: http://qosip.tmit.bme.hu/~sonkoly/Tcp/files/Technical_Report.pdf I have also included the abstract of the report. Abstract --------- The short-term dynamics of competing high speed TCP flows could have surprising impacts on their long-term fairness. As a result, this could have a severe impact on the co-existence and, finally, the deployment feasibility of different seemingly promising proposals for the next generation networks. However, to the best of our knowledge, no root-cause analysis of the observation is available. This is the major motivation of our work. The contribution of the paper is twofold. First, we present our comprehensive performance evaluation results of both inter- and intra-protocol fairness behavior of different TCP versions to get an overall view of these protocols. The analysis has revealed not only the equilibrium behavior but also the transient characteristics with the dynamic behavior. Second, we have performed a root-cause analysis to get a deeper understanding in the case of some of the promising TCP versions. This study not only fills the "black holes", the questions which remained unanswered in some cases, but also goes deeper and investigates questions which have not been asked before. The analysis spans multiple dimensions: flow-level, packet-level, queueing and spectral analysis. Three loss-based (HighSpeed TCP, Scalable TCP and BIC TCP) approaches and the delay-based FAST are investigated in detail with both dumb-bell and parking-lot topologies.
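[As background to the long-term fairness that studies like this quantify: a common summary statistic in such evaluations is Jain's fairness index. This sketch is illustrative only and is not taken from the report above, whose own contribution is the different "saturation time" metric:]

```python
def jain_fairness_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    Equals 1.0 when all n flows get equal throughput, and
    approaches 1/n when a single flow takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))
```

[Long-term fairness is then the index computed over whole-experiment averages, while short-term dynamics show up as the index computed over successive short windows.]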
Regards, Sandor From: mascolo at poliba.it (Saverio Mascolo) Date: Tue, 6 Nov 2007 12:02:32 +0100 Subject: [Tmrg] (no subject) Message-ID: <000601c82064$8f339540$723bccc1@HPSM> what do you mean by "different level of congestion"? sm On 11/6/07, Sally Floyd wrote: > - The rise time of a single flow to an empty systems is not very > interesting, because it many measures the impact of slow start. Actually, the flow completion time in an uncongested system can be a quite interesting thing to measure, particularly if one is evaluating one of the many proposals for start-ups faster than slow-start. (One of these proposals is Quick-Start, RFC 4782; some of the others are discussed in Appendix A of RFC 4782.) I would recommend having scenarios include the case of a generally-uncongested link, as well as including cases with various levels of congestion, with metrics including per-flow transfer times, fairness, and aggregate packet drop rates. This should illustrate some of the good and potentially-bad aspects of protocols with fast start-ups. - Sally http://www.icir.org/floyd/ RFC 4782: http://www.ietf.org/rfc/rfc4782.txt _______________________________________________ Tmrg-interest mailing list Tmrg-interest at ICSI.Berkeley.EDU http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 6 Nov 2007 14:02:06 -0800 Subject: [Tmrg] (no subject) In-Reply-To: <000601c82064$8f339540$723bccc1@HPSM> References: <000601c82064$8f339540$723bccc1@HPSM> Message-ID: <9bfa0ca77cca000514697fe7682523a5@mac.com> Saverio - > what do you mean by "different level of congestion"? Include "uncongested" scenarios, e.g., with low levels of link utilization.
And include highly "congested" scenarios, with high levels of link utilization and a range of packet dropping and/or marking rates. The "different levels of congestion" would be produced by scenarios with different levels of traffic, e.g., the number of web sessions started each second, the number of long-lived flows, etc. I don't have a proposal for a single metric that captures the "level of congestion" in congested and uncongested scenarios, however. (It is not clear to me that we need one.) - Sally > On 11/6/07, Sally Floyd wrote: > >> - The rise time of a single flow to an empty system is not very >> > interesting, because it mainly measures the impact of slow start. >> >> Actually, the flow completion time in an uncongested system can be >> quite an interesting thing to measure, particularly if one is >> evaluating one of the many proposals for start-ups faster than >> slow-start. (One of these proposals is Quick-Start, RFC 4782; some >> of the others are discussed in Appendix A of RFC 4782.) >> >> I would recommend having scenarios include the case of a >> generally-uncongested link, as well as including cases with various >> levels of congestion, with metrics including per-flow transfer >> times, fairness, and aggregate packet drop rates. This should >> illustrate some of the good and potentially-bad aspects of protocols >> with fast start-ups. >> >> - Sally >> http://www.icir.org/floyd/ >> >> RFC 4782: http://www.ietf.org/rfc/rfc4782.txt >> >> _______________________________________________ >> Tmrg-interest mailing list >> Tmrg-interest at ICSI.Berkeley.EDU >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > > - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 6 Nov 2007 15:21:17 -0800 Subject: [Tmrg] Round table: level of realism of tests? 
In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: >> One of my other views (as >> expressed in the 2002 Hotnets paper on "Internet Research Needs >> Better Models") is that a reliance on unrealistic scenarios in >> evaluating transport protocols (e.g., scenarios with only one-way >> traffic, only long-lived flows, or only flows all with the same >> RTT) could do a serious dis-service in the design and evaluation >> of transport protocols. > > True, that is a danger we must try to avoid. It has to be balanced > against the need to hold some variables fixed while others are varied. > For example, if we're looking at the impact of the number of hops on > fairness, there is a case to keep RTTs equal to eliminate the effect > of RTT-unfairness for that experiment. My assumption would be that if we are looking at a basic test of tradeoffs between bandwidth, delay, and packet drop rates, and we were varying bandwidth, *all* of these basic tests would have realistic scenarios, with a realistic range of RTTs for traffic on the congested link, a realistic range of packet sizes (including 40-byte TCP ACK packets from reverse-path traffic), a realistic distribution of connection sizes, a realistic mix of TCP and UDP traffic, and the like. And that if we were looking at the effect of different levels of reverse-path traffic, all of the other variables would be held fixed at realistic values. That is, I am assuming that we are not creating scenarios for people to use to debug their own congestion control mechanisms. (People seem to be able to do that for themselves.) I am assuming that the *first* priority is to create scenarios to *evaluate* congestion control mechanisms. Our own, and other people's. And I think that requires realistic scenarios, for the most part. > We can also do things to maximize repeatability without reducing > realism, like agreeing on some deterministic (pseudo-random?) 
cross > traffic rather than every experiment having unique cross traffic. That sounds fine to me. It would certainly be a good thing if different simulators and different testbeds all gave the same *general* results for the same scenario. Certainly, within a single simulator, results should be repeatable. (E.g., in ns-2 version x, with simulation script y, and seed z for the pseudo-random number generator, all users should get the same results. And in a particular testbed, with a particular set of code, and a particular set-up, repeated experiments should get the same results.) However, I assume that there is a limit to the repeatability across different simulators or different testbeds, or between simulators and testbeds. Certainly it would be good for experiments to be done in both simulators and testbeds, and for the overall quantitative results to be the same. As discussed in the 2001 paper on "Difficulties in Simulating the Internet" ("http://www.icir.org/floyd/papers/simulate_2001.pdf"), simulators and testbeds in some cases have different roles to play. E.g., testbeds will probably be more useful for exploring interactions with the various features of commercial routers, firewalls, middleboxes, and the like. And simulators will probably be more useful to the individual researcher (particularly the one who does not have a testbed at their disposal) to play with a wide range of scenarios, to develop intuition, and to easily explore functionality that is not yet implemented in testbeds or in the real world. So it would be fine with me, for example, if there were some scenarios that would run on testbeds but not in simulators. Or vice versa. > I've always thought of this round-table as only a start, which is why > I felt comfortable suggesting having too many "understanding" > experiments, at the expense of "evaluating" experiments. But the > table is round, so my suggestion won't determine what happens. Great. 
I will be there pushing for both the "understanding" and the "evaluating" experiments to have as many realistic scenarios as possible. To avoid encouraging researchers to develop an "understanding" that doesn't have much to do with the real world... - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 6 Nov 2007 15:45:53 -0800 Subject: [Tmrg] Round table: Buffer sizes In-Reply-To: References: <46F98369.8030605@psc.edu> <65b35fd1d6c8959209fe942f1c5ddece@mac.com> Message-ID: <5c2659a26b4632985c2d18e1afc14ee3@mac.com> (Going back to some October 2 email. Apologies, again...) > If (big if!) physical routers predominantly have > buffers in packets, then I'd prefer to start with a subset of the > core tests which only use buffers in packets. Yep. I would *guess* that most routers can be characterized as having buffers in packets (e.g., having slots for packet headers, with the actual packet stored elsewhere). I don't know for sure, however. Recent experiments show that 26% of DSL hosts tested show a RED-style drop policy on their upstream queues. (Dischinger et al., "Characterizing Residential Broadband Networks", IMC '07, "http://www.imconf.net/imc-2007/papers/imc137.pdf".) So it *is* getting to be time for realistic scenarios to include some form of AQM, as well as Drop-Tail. ... > Motivated by past debates over different labs' tests, I was also more > interested in repeatability than realism. If we get different results > using simulation from dummynet or different results using dummynet > from real WAN testbeds, it would be ideal if the results are "clean" > enough to find out what causes the difference. That means many of > the tests may lack important attributes like "web" cross traffic -- > although of course there must also be enough tests with cross traffic > to see how the algorithm will perform in practice. 
> > Once one or two tests have been defined precisely enough to be > repeatable by different labs, it would of course be good to extend to > a "wider core", like the one you describe. I would like to encourage a wider scope, with vaguely-realistic scenarios including a realistic range of connection sizes. And if two testbeds give qualitatively different results, then the troubleshooting of the differences can include simplifying the scenarios one step at a time. ... > Since the TCP algorithms themselves determine the percentages of > traffic, we should specify the traffic in terms of the number of > flows with each MTU, rather than the amount of traffic. How about > specifying 90% of flows use 1500-byte, and 10% of flows use 536-byte? Something like that sounds good to me. Using setsockopt(TCP_MAXSEG), as suggested by John. And with reverse-path traffic providing the 40-byte ACK packets. - Sally http://www.icir.org/floyd/ From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 6 Nov 2007 15:50:28 -0800 Subject: [Tmrg] TCP evaluation suite round-table In-Reply-To: <20A4D241-511A-4F5C-BA72-2E1B96F25689@nokia.com> References: <46DED185.4040408@mail.eecis.udel.edu> <20A4D241-511A-4F5C-BA72-2E1B96F25689@nokia.com> Message-ID: <5dd6578a875f79064b20fad94680d8fe@mac.com> Lars - On Sep 26, 2007, at 12:31 AM, Lars Eggert wrote: > it might make sense to add one or two test cases that include a GSM > or UMTS access link. I think that would be a great idea. - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 6 Nov 2007 17:22:50 -0800 Subject: [Tmrg] Tweak of schedule for round table Message-ID: Greetings all, To accommodate a request by colleagues who will be attending remotely, I'd like to swap the session on "impact on congestion control" (early afternoon on Thursday) with the one on "convergence time". The new agenda is at . 
If that is inconvenient for anyone (especially the session leaders, Larry and Sally, or those attending from other time zones), please let me know. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: downey at allendowney.com (Allen Downey) Date: Wed, 7 Nov 2007 10:00:30 -0500 Subject: [Tmrg] (no subject) In-Reply-To: <000601c82064$8f339540$723bccc1@HPSM> References: <000601c82064$8f339540$723bccc1@HPSM> Message-ID: <2890f9510711070700q77b0e1caoa30d0f4d19270a1b@mail.gmail.com> Hi All, I don't think I have the email that started this thread, so I might be out of line. But I wanted to suggest that there are some interesting things that happen in slow start on an uncongested system, depending on the size of the buffer at the bottleneck relative to the bandwidth-delay product. With apologies for this shameless act of self-promotion, I have a paper on this topic that you can download here: http://allendowney.com/research/tcp/downey07tcp.pdf If that's useful, let me know. If not, I'm sorry for jumping into the middle! Cheers, Allen On 11/6/07, Sally Floyd wrote: > > > > > - The rise time of a single flow to an empty system is not very > > > interesting, because it mainly measures the impact of slow start. > > > > Actually, the flow completion time in an uncongested system can be > > quite an interesting thing to measure, particularly if one is > > evaluating one of the many proposals for start-ups faster than > > slow-start. (One of these proposals is Quick-Start, RFC 4782; some > > of the others are discussed in Appendix A of RFC 4782.) > > > > I would recommend having scenarios include the case of a > > generally-uncongested link, as well as including cases with various > > levels of congestion, with metrics including per-flow transfer > > times, fairness, and aggregate packet drop rates. 
This should > > illustrate some of the good and potentially-bad aspects of protocols > > with fast start-ups. > > > > - Sally > > http://www.icir.org/floyd/ > > > > RFC 4782: http://www.ietf.org/rfc/rfc4782.txt > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071107/2fb9b7e3/attachment.html From: h-shimonishi at cd.jp.nec.com (Hideyuki Shimonishi) Date: Fri, 09 Nov 2007 01:07:49 +0900 Subject: [Tmrg] convergence time In-Reply-To: References: Message-ID: <5.1.1.8.2.20071109003040.053271e0@mail.jp.nec.com> Hi Lachlan, David, Cesar, and all, The meeting will begin soon. Unfortunately, I could not manage to attend the meeting... Yeah, I agree with the idea that we should look at the aggregate rate, and its convergence, at different levels of congestion in a realistic environment. But I think we also need to look at the fairness of each flow, or the distribution of per-flow throughput, as well. Even if the aggregate rate converges, I do not think this means the convergence of each flow. However, looking at the behavior of individual flows in a realistic environment can be difficult, so I think it might be a good idea to look at the distribution of per-flow average throughput. What I mean is the following: 1) Obtain per-flow throughput distributions with a variety of RTTs, hop counts, load levels, and so on. In this case, all flows are long-lived. 2) Obtain per-flow throughput distributions with the very same environment, but flows are a mix of short-lived and long-lived. 3) Compare the difference between these distributions, or their COV. I guess 1) could be a measure of fairness (and of friendliness, if we mix different kinds of flows), and 3) could be a measure of convergence. If a protocol has good convergence, the difference should be smaller. Also, it might be good to see the distribution with only small files or large files. 
The former reflects the slow-start behavior of a protocol, and the latter reflects the congestion avoidance behavior. That is why I proposed our tool (literature [7]) and published it at the UCLA website. We can do the above testing more easily with the tool. Does this sound like a more realistic measure of convergence, in addition to the ones you have proposed? I am not sure Cesar will cover this point, but I guess this would be one of the important points we should cover. Thanks, HIDEyuki Shimonishi At 07/10/31 09:18 -0800, Lachlan Andrew wrote: >Greetings David, > >On 28/10/2007, Xiaoliang (David) Wei wrote: > > Another option, to eliminate the dependency on stability and > > timescale, is that we don't study the convergence of the current rate. > > Instead, we study the convergence of the aggregate average rate. That is, > > if the instantaneous rate of a flow at time t is x(t), we define the > > aggregate average rate of the flow at time t to be > > X(t) = 1/t * sum u=0->t x(u). > > ("sum" can be "integrate" if the time is continuous). > >Good idea. > > > Then we study the convergence of the curve X(t) to the "final value". > > This process might be easier as: > > 1. X(t) is easier to measure because we can just look at the amount we > > have transferred from time 0 to time t; > > 2. X(t) converges even if x(t) has a limit-cycle oscillation, so it is less > > sensitive to stability > > 3. If x(t) converges fast, X(t) converges fast too. We can still compare > > the convergence with X(t) > > 4. X(t) does have meaning in user-experience. It measures how long the > > users have to participate in the network to get to the desired rate. > >They're all good points. > >The main drawback is that X(t) converges (much) more slowly, since >it always gives some weight to the early rates. If we want to observe >the impact of each of several newly arriving flows, we need to space >them out further if we use X(t) than we do if we use x(t), or else >the transients will interact. 
> >The time required to find the "final" value could already be quite >long, especially in the case of Reno, which takes hours to reach >equilibrium on large BDP paths. > >What do others think? > >Cheers, >Lachlan > >-- >Lachlan Andrew Dept of Computer Science, Caltech >1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >http://netlab.caltech.edu/~lachlan >_______________________________________________ >Tmrg-interest mailing list >Tmrg-interest at ICSI.Berkeley.EDU >http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest ------------------------------------------------------------------- Hideyuki Shimonishi, Ph.D Assistant Manager R&D Unit / System Platforms Research Laboratories, NEC Corporation h-shimonishi at cd.jp.nec.com ------------------------------------------------------------------- From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 8 Nov 2007 11:44:41 -0800 Subject: [Tmrg] slides for Sally's discussion at the CalTach meeting Message-ID: <58714c247084b836cc9a5269e3c5fb08@mac.com> http://www.icir.org/floyd/talks/impactOfNewCC.ppt - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Thu, 8 Nov 2007 23:29:04 -0700 Subject: [Tmrg] Scenario for convergence time Message-ID: Greetings all, Here is my homework: the "convergence time" scenario. I also realised that we didn't mention a "hop count fairness" scenario, so I put a few thoughts down for that too. Test aims to determine how quickly existing flows make room for new flows. Agreed on: - Start with one flow in "equilibrium", 10% "background traffic", one flow having aborted slow start with window size 2~4 (initial CWND) - "Realistic" mix of RTTs for background traffic - One measure is time for new flow to transmit 10, 100, 1000, 10000 1500-byte packets. (Can be a *single* simulation/experiment, if we know when each byte is received.) 
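Both the agreed measure above (the time for the new flow to transmit its first 10/100/1000/10000 1500-byte packets) and the aggregate average rate X(t) from the convergence-time discussion can be read off a single cumulative-byte trace. A minimal sketch in Python; the trace data and helper names are purely illustrative, not from the thread:

```python
# Sketch: computing convergence metrics from a (time, cumulative_bytes)
# trace of a new flow. X(t) = (1/t) * integral of x(u) du, as proposed
# by David Wei, is just cumulative_bytes / t. The trace is synthetic.

def aggregate_average_rate(trace):
    """X(t) = cumulative_bytes / t for each sample with t > 0."""
    return [(t, b / t) for t, b in trace if t > 0]

def time_to_transmit(trace, n_packets, mtu=1500):
    """Time for the flow to deliver its first n_packets MTU-sized packets,
    by linear interpolation between trace samples; None if never reached."""
    target = n_packets * mtu
    prev_t, prev_b = 0.0, 0.0
    for t, b in trace:
        if b >= target:
            if b == prev_b:
                return t
            # interpolate within the sampling interval
            return prev_t + (t - prev_t) * (target - prev_b) / (b - prev_b)
        prev_t, prev_b = t, b
    return None

# synthetic trace: (seconds since flow start, cumulative bytes received)
trace = [(1, 30_000), (2, 150_000), (3, 600_000), (4, 1_500_000),
         (5, 3_000_000), (6, 4_500_000), (7, 6_000_000)]

X = aggregate_average_rate(trace)   # bytes/sec, averaged from t = 0
t10 = time_to_transmit(trace, 10)   # time to deliver the first 10 packets
t100 = time_to_transmit(trace, 100)
```

Both helpers use the same samples, which is why a single simulation or experiment suffices if we know when each byte is received, as noted in the agreed list above.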
My Proposals: - test equal RTTs, new RTT 4 times longer and 4 times shorter than existing. - For equal RTTs and protocols with a loss component: time until window of new flow after window reduction is at least as large as min window of old flow after a reduction To decide: - What statistics of background traffic? - What RTTs? 80 and 120/30? - What bandwidth? All? 100Mbps? - Should it be specified in bytes instead of packets, to make it MTU-agnostic? - Single link only? Multi-bottleneck fairness: Aim: Determine how much less bandwidth is given to a flow using multiple bottlenecks than to a flow with equal RTT using only one of those bottlenecks. - 2, 3 link parking lot - 3 link network with two two-link flows and a three-link flow (and three one-link flows?) Especially important for hybrid loss/delay - three hop ring with overlapping two-hop flows. Need fancy routing? - three link star. Automatic bi-directional traffic. Other issues: - highly skewed RTTs (May not be "typical" but important/realistic/informative special case.) Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: michael.welzl at uibk.ac.at (Michael Welzl) Date: Fri, 09 Nov 2007 08:57:00 +0100 Subject: [Tmrg] slides for Sally's discussion at the CalTach meeting In-Reply-To: <58714c247084b836cc9a5269e3c5fb08@mac.com> References: <58714c247084b836cc9a5269e3c5fb08@mac.com> Message-ID: <1194595020.3732.19.camel@pc105-c703.uibk.ac.at> Hi, These slides are great! I think that, while it's now common to include at least one or two TCP-friendliness tests based on one scenario in studies about high speed CC variants, there is still too little focus in most current work on the impact that such mechanisms have on other traffic (and therefore I'm cc'ing ICCRG :-) ). 
Cheers, Michael On Thu, 2007-11-08 at 11:44 -0800, Sally Floyd wrote: > http://www.icir.org/floyd/talks/impactOfNewCC.ppt > > - Sally > http://www.icir.org/floyd/ > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Fri, 9 Nov 2007 00:00:33 -0800 Subject: [Tmrg] Scenario for "Impact of New Protocols on Legacy TCP NewReno" Message-ID: <88d780b40711090000l7b2d95b5k2a3b17eae477703a@mail.gmail.com> Dear Round-Table Participants, I've got the "impact of New Protocols on Legacy Congestion Control (TCP NewReno)". Here is the description in plain English, along with some discussion context, agreed-on and controversial points, and to-dos. The idea is to perform tests over a dumbbell topology and compare the performance of (1) one execution of N homogeneous TCP NewReno flows and, afterwards, using the same seed, (2) one execution of mixed TCP NewReno + New Protocol flows, where there are N/2 TCP NewReno flows. As the "New Protocol" replaces half of the NewReno flows, it aims to improve the overall utilization (by a certain amount G), but on the other hand, it could harm the throughput (by a certain amount L) of the co-existing NewReno flows in doing so. Controversial points: + Using the same seed guarantees that the same environment is run in the two executions, and thus that the only difference is the protocol itself. Round-Table Discussion: + In the literature review, Dr. Sally pointed out the "Bandwidth stolen from TCP" concept [http://www.icir.org/floyd/talks/impactOfNewCC.ppt], slide 8, as a possible metric to evaluate the impact of a new protocol on TCP. Cesar added that he had found other proposals similar in nature in his own literature review. 
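The message above leaves G and L only loosely defined. One possible formalization, sketched in Python with hypothetical throughput numbers (the function name and the exact definitions of G and L here are assumptions, not from the thread):

```python
# Sketch of one possible reading of Cesar's G (utilization gain) and
# L (NewReno throughput loss) metrics; the definitions are assumed.
# Run 1: N NewReno flows. Run 2 (same seed): N/2 NewReno + N/2 new-protocol.

def impact_metrics(newreno_only, mixed_newreno, mixed_new, capacity):
    """G: relative gain in aggregate utilization in the mixed run.
    L: relative loss in mean per-flow NewReno throughput."""
    util1 = sum(newreno_only) / capacity
    util2 = (sum(mixed_newreno) + sum(mixed_new)) / capacity
    G = (util2 - util1) / util1
    mean1 = sum(newreno_only) / len(newreno_only)
    mean2 = sum(mixed_newreno) / len(mixed_newreno)
    L = (mean1 - mean2) / mean1
    return G, L

# hypothetical throughputs in Mbps on a 100 Mbps bottleneck
newreno_only = [22, 23, 21, 24]   # run 1: 4 NewReno flows
mixed_newreno = [18, 19]          # run 2: 2 surviving NewReno flows
mixed_new = [30, 28]              # run 2: 2 new-protocol flows

G, L = impact_metrics(newreno_only, mixed_newreno, mixed_new, capacity=100)
ratio = L / G   # one candidate combined metric (a loss-per-gain ratio)
```

A 2-D plot of (G, L) pairs across scenarios would be the alternative to a single combined ratio.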
+ There was an extra clarification on properly distinguishing the "measure of sharing in the mixed-flows scenario" (using some fairness metric) from the "measure of the impact", the actual metric described above. Agreed on: + None yet. Todo: + Combine the amounts G and L into either a single metric, "L/G"?, or use a 2-D graph to represent the tradeoff between G and L? + What would be a reasonable range over which L could be reduced? Best regards, Cesar Marcondes CS/UCLA From: sangtae.ha at gmail.com (SANGTAE HA) Date: Fri, 9 Nov 2007 08:40:43 -0500 Subject: [Tmrg] Scenario for inter-RTT fairness Message-ID: <649aecc70711090540u2416404p6814efa4eb2b00e7@mail.gmail.com> Dear Round-table participants, I've got the "Inter-RTT fairness scenario" in the meeting. The scenario proposed here is based on what we have been doing for RTT fairness testing. Scenario: - One flow with a fixed RTT. - The RTT of the other flow is varied. - The bottleneck buffer size is set to some percentage of BDP. - Measure throughput over the second half of a simulation. (This is what Sally suggested in other testing scenarios.) - The metric can be either throughput ratio (fairness ratio) or fairness index. To agree on: - Do we need background traffic for the inter-RTT fairness test? If so, 10% background traffic with a realistic mix of RTTs? - The range of RTTs we are interested in. E.g., one flow has a fixed RTT of 160ms, and the other flow varies its RTT from 10ms to 160ms (10ms, 20ms, 40ms, 80ms, 160ms). - What buffer size in the bottleneck? E.g., a 100ms BDP buffer size? Regards, Sangtae From: Romaric.Guillier at ens-lyon.fr (Romaric Guillier) Date: Fri, 09 Nov 2007 16:57:56 +0100 Subject: [Tmrg] Scenario "Impact of transient states on CC" Message-ID: <20071109165756.c0rxjyr73vw44wgc@tadorne.ens-lyon.fr> Hi! Here is my proposal for a scenario to test the impact of transient states on CC methods and some points that need to be discussed. 
Cheers Romaric Guillier -------------- next part -------------- *Scenario transient events Through this scenario, we are trying to evaluate the impact of a sudden change of congestion level on a given congestion control method. We are considering both cases: a sudden decrease and a sudden increase of the congestion level. This scenario is composed of three parts: the control, the uphill test and the downhill test. The control corresponds to a file transfer of a given volume of data, so as to measure the time needed to complete the transfer without perturbations. The downhill test consists of abruptly applying a given congestion level while a file transfer is occurring; the uphill test is done by starting a file transfer when there is congestion in the system, and later the congestion abruptly disappears. *Parameters V, the size of the transfer Cg, the congestion level that is applied to the system *Metrics Aggressiveness = (Tuphill - Tcontrol)/ Tcontrol Responsiveness = (Tdownhill - Tcontrol)/ Tcontrol *Timeline --Control At time = 0, a transfer of size V is started At time = Tcontrol, the transfer completes --Downhill test At time = 0, a transfer of size V is started At time = Tcong, Cg is applied to the system At time = Tdownhill, the transfer completes --Uphill test At time = -1, Cg is applied to the system At time = 0, a transfer of size V is started At time = Tcong, Cg is removed from the system At time = Tuphill, the transfer completes *To be discussed: - Why not put the uphill/downhill phases in the same transfer? Then we don't need to ask ourselves questions about convergence and when to switch, though we would still need to perform two tests to cover every possibility - Should we remove the slow start phase? 
- How to generate the congestion: *UDP, similar TCP flows (interaction with the inter-fairness problem), reference TCP flows (interaction with the intra-fairness problem) - Tcong is a function of V (and possibly RTT): the time to transfer X% of V when there is no congestion, the time to be well out of the slow-start phase, etc. From: sallyfloyd at mac.com (Sally Floyd) Date: Fri, 9 Nov 2007 09:01:44 -0800 Subject: [Tmrg] Scenarios for testing delay/throughput tradeoffs Message-ID: Here is a first draft describing possible scenarios for testing delay/throughput tradeoffs for a particular congestion control mechanism. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Delay_throughput.txt Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071109/32b5f526/attachment.txt -------------- next part -------------- - Sally http://www.icir.org/floyd/ From: lars.eggert at nokia.com (Lars Eggert) Date: Fri, 9 Nov 2007 09:10:39 -0800 Subject: [Tmrg] p2p stats Message-ID: Here's a link to the p2p stats I mentioned at the Caltech meeting: http://www.ipoque.com/userfiles/file/P2P-Survey-2006.pdf http://www.ipoque.com/media/news/pressrelease_ipoque_241006.html http://www.ipoque.com/media/news/ipoques_2007_p2p_survey_to_be_presented_at_technology_reviews_emerging_technologies_conference_at_mit.html Lars -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071109/4e3f198a/attachment-0001.html 
From: mascolo at poliba.it (Saverio Mascolo) Date: Fri, 16 Nov 2007 13:52:06 +0100 Subject: [Tmrg] Scenarios for testing delay/throughput tradeoffs Message-ID: <006a01c8284f$886b7410$723bccc1@HPSM> The paper "Performance Evaluation and Comparison of Westwood+, New Reno and Vegas TCP" that we published in the April 2004 issue of ACM CCR contains some experience that could be considered. In particular: 1. A single forward connection with on-off reverse traffic (f.i. one reverse TCP of the same flavour), to test the effect of reverse traffic. This is VERY important because reverse traffic affects the ACK flow of the forward TCP connection. 2. Single bottleneck: the sequence number versus time of different connections shows how fairly each connection advances. This graph can reveal strong unfairness and starvation. 3. Multi-bottleneck: this case is quite complex. Under high load, our experience is that all TCPs' performance slows down. best saverio On Nov 9, 2007 6:01 PM, Sally Floyd wrote: Here is a first draft on describing possible scenarios for testing delay/throughput tradeoffs for a particular congestion control mechanism. - Sally http://www.icir.org/floyd/ _______________________________________________ Tmrg-interest mailing list Tmrg-interest at ICSI.Berkeley.EDU http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest -- Prof. Saverio Mascolo Dipartimento di Elettrotecnica ed Elettronica Politecnico di Bari Tel. +39 080 5963621 Fax. +39 080 5963410 email:mascolo at poliba.it http://www-dee.poliba.it/dee-w -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071116/2edee4a8/attachment.html From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 21 Nov 2007 14:57:53 -0800 Subject: [Tmrg] Traffic generators Message-ID: Injong, Thanks for agreeing to look into recent models of file size distribution. Do you have any updates for us? Apart from the marginal distribution, it would be very interesting to know if/how the distribution depends on (a) the bottleneck capacity or (b) the bottleneck utilization. Sangtae, Have you had a chance to study the Harpoon docs yet? I'm pretty sure that it specifies file arrival times and file durations; the "sessions" come and go deterministically (like once per hour), and are meant to model daily variation in load, not "web sessions". Can you confirm that? I'm not meaning to rush either of you -- just keep the dialogue going... Thanks Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: jsommers at cs.wisc.edu (jsommers at cs.wisc.edu) Date: Wed, 21 Nov 2007 19:49:53 -0600 (CST) Subject: [Tmrg] Traffic generators In-Reply-To: References: Message-ID: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> Lachlan, Yes, file arrival times and file sizes are specified when generating traffic with Harpoon. Indeed, sessions are intended to mimic longer-time scale variations in traffic volume. They are *not* intended to model anything about the web, or web sessions. Joel (long-time lurker, first-time poster...) > Sangtae, > > Have you had a chance to study the Harpoon docs yet? > > I'm pretty sure that it specifies file arrival times and file > durations; the "sessions" come and go deterministically (like once per > hour), and are meant to model daily variation in load, not "web > sessions". Can you confirm that? 
> > I'm not meaning to rush either of you -- just keep the dialogue going... > > Thanks > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/~lachlan > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > From: sangtae.ha at gmail.com (SANGTAE HA) Date: Wed, 21 Nov 2007 23:29:21 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> Message-ID: <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> Lachlan, Joel clarified the scope of Harpoon. Thank you for clarifying this, Joel. Then, somehow we need to consider the other candidate, Tmix, which we discussed at the meeting. The Tmix[1] paper says that it supports source-level and session-level replaying of the trace, so it is expected to support both short-lived (HTTP and VoIP) and long-lived (FTP and P2P) flows. The connection vectors which Tmix builds from the trace can be used in NS2 and over a testbed, so we can use the same traffic for both environments. But, I recall that the implementation of Tmix for Linux was not ready a year ago (it was only available for the FreeBSD platform at that time). Also it is not clear how many machines are required to generate the traffic based on the trace, which is also important for us. If we get these answers from the authors, we are close to selecting the traffic generator for TCP testing. I am CCing Michele, one of the authors of this paper, for the latest status of Tmix. 
Thanks, Sangtae [1] Tmix: A Tool for Generating Realistic TCP Application Workloads in ns-2, http://www.sigcomm.org/ccr/drupal/?q=node/50 > On Nov 21, 2007 8:49 PM, wrote: > Yes, file arrival times and file sizes are specified when generating > traffic with Harpoon. Indeed, sessions are intended to mimic longer-time > scale variations in traffic volume. They are *not* intended to model > anything about the web, or web sessions. From: rchertov at purdue.edu (Roman Chertov) Date: Fri, 23 Nov 2007 12:21:35 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> Message-ID: <47470C1F.5040404@purdue.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello, I have been following the list for a while but have never posted. For my thesis I have created extensions for ns-2 to improve its emulation capabilities. The following paper describes the system that I have built. http://www.cs.purdue.edu/homes/rchertov/papers/global07.pdf Basically, I tied ns-2 and Click modular router to create a traffic generator that can utilize ns-2 traffic models and TCP agents and generate real IP traffic. The generated IP traffic can have many unique addresses. Additionally, the traffic can then be re-injected into the simulator. I have used this to create PackMIME HTTP and mixed TCP and UDP traffic workloads. Roman SANGTAE HA wrote: > Lachlan, > > Joel clarified the scope of Harpoon. Thank you for clarifying this, Joel. > Then, somehow we need to consider the other candidate, Tmix, which we > discussed at the meeting. > > Tmix[1] paper says that it supports source-level and session-level > replaying of the trace, so it is expected to support both short-lived > (HTTP and VoIP) and long-lived (FTP and P2P) flows. 
The connection > vectors which Tmix builds from the trace can be used in NS2 and over > testbed, so we can use the same traffic for both environments. > > But, I recall that the implementation of Tmix for Linux was not ready > a year ago (only available for FreeBSD platform at that time). > Also it is not clear how many machines are required to generate the > traffic based on the trace, which is also important for us. > > If we get these answers from the authors, we are right before the > selection of the traffic generator for TCP testing. > I am CCing to Michele, one of the author of this paper, for the latest > status of Tmix. > > Thanks, > Sangtae > > [1] Tmix: A Tool for Generating Realistic TCP Application Workloads in > ns-2, http://www.sigcomm.org/ccr/drupal/?q=node/50 > > >> On Nov 21, 2007 8:49 PM, wrote: >> Yes, file arrival times and file sizes are specified when generating >> traffic with Harpoon. Indeed, sessions are intended to mimic longer-time >> scale variations in traffic volume. They are *not* intended to model >> anything about the web, or web sessions. > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFHRwweT8ksiSCF2AYRAmsBAKCR32oz11FXgO7euCVIcn0ADOZYiwCfb8KP 2ttH+vL+mLEx1KS0DEMlxzE= =g4up -----END PGP SIGNATURE----- From: sangtae.ha at gmail.com (SANGTAE HA) Date: Fri, 23 Nov 2007 21:09:03 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <4746F7B0.80107@email.unc.edu> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> <4746F7B0.80107@email.unc.edu> Message-ID: <649aecc70711231809r7436185by6a603201b547e340@mail.gmail.com> Jay, Thank you for the answer. One more question about this. 
Suppose that we extract N connection vectors (Ai, Bi, Ti), identified by their (sip, dip) or (sip, sport, dip, dport); Tmix will delegate each connection vector to one of the machines in the testbed. Yes, we can distribute these vectors evenly to each machine in the testbed, or can dedicate all connection vectors to only one machine. I just want to know how many connection vectors (or how much aggregate throughput) can be handled by one commodity machine (as you listed, a 1 or 2 GHz CPU, 1 GB RAM server). I can see the number of connections extracted from the UNC trace is around 2500, and my simple calculation from the CDFs given in the paper gives < 250Mbps for burst traffic (100Kbps per connection and 2500 connections at this time instance). From your experience, can this traffic be handled by one or two pairs of machines? :-) I know it highly depends on the socket buffer size for each connection as well as other system specs. Sangtae > It really depends on the capability of the machines. I have used medium grade > machines, ~1GHz CPU 1GB RAM as well as 500MHz CPU with 256 MB RAM, within a lab > network (I have not used tmix in ns2). The idea with tmix is that, given the > original trace, you create a set of connection vectors for each pair of machines > in the network, knowing the target throughput you wish to generate, and the > capability and number of machines in your network. From: jaikat at email.unc.edu (Jay Aikat) Date: Fri, 23 Nov 2007 10:54:24 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> Message-ID: <4746F7B0.80107@email.unc.edu> Sangtae, I use tmix on FreeBSD and wanted to clarify your question below about "how many machines are required to generate the traffic based on the trace". It really depends on the capability of the machines.
I have used medium grade machines, ~1GHz CPU 1GB RAM as well as 500MHz CPU with 256 MB RAM, within a lab network (I have not used tmix in ns2). The idea with tmix is that, given the original trace, you create a set of connection vectors for each pair of machines in the network, knowing the target throughput you wish to generate, and the capability and number of machines in your network. I hope this answers that question, if not I can clarify further. I was not involved in this discussion so, I am not sure what is known and not known to this group about tmix. But I use it on FreeBSD and would be happy to clarify points. Thanks, --Jay. SANGTAE HA wrote: > Lachlan, > > Joel clarified the scope of Harpoon. Thank you for clarifying this, Joel. > Then, somehow we need to consider the other candidate, Tmix, which we > discussed at the meeting. > > Tmix[1] paper says that it supports source-level and session-level > replaying of the trace, so it is expected to support both short-lived > (HTTP and VoIP) and long-lived (FTP and P2P) flows. The connection > vectors which Tmix builds from the trace can be used in NS2 and over > testbed, so we can use the same traffic for both environments. > > But, I recall that the implementation of Tmix for Linux was not ready > a year ago (only available for FreeBSD platform at that time). > Also it is not clear how many machines are required to generate the > traffic based on the trace, which is also important for us. > > If we get these answers from the authors, we are right before the > selection of the traffic generator for TCP testing. > I am CCing to Michele, one of the author of this paper, for the latest > status of Tmix. > > Thanks, > Sangtae > > [1] Tmix: A Tool for Generating Realistic TCP Application Workloads in > ns-2, http://www.sigcomm.org/ccr/drupal/?q=node/50 > > >> On Nov 21, 2007 8:49 PM, wrote: >> Yes, file arrival times and file sizes are specified when generating >> traffic with Harpoon. 
Indeed, sessions are intended to mimic longer-time >> scale variations in traffic volume. They are *not* intended to model >> anything about the web, or web sessions. > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: jaikat at email.unc.edu (Jay Aikat) Date: Sat, 24 Nov 2007 07:44:33 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <649aecc70711231809r7436185by6a603201b547e340@mail.gmail.com> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> <4746F7B0.80107@email.unc.edu> <649aecc70711231809r7436185by6a603201b547e340@mail.gmail.com> Message-ID: <47481CB1.8060702@email.unc.edu> Sangtae, Just fyi, I think this example in the paper is actually quite low in terms of active connections etc. that we've seen and been able to handle in the lab. I've run experiments with 30,000 active connections per second, but of course with a larger number of much slower machines. Now to specifically answer your question, from my experience, I would guess you would need 2 pairs of machines of the class you specify, for the 2500 active connections. This assumes of course highly well-tuned machines in terms of kernel limits. I doubt you'll get away with one pair - the CPU utilization in handling so many connections may get you down, although I checked my experiments and saw that a (1 GHz, 1 GB) machine did handle up to 2500 active connections with under 50% CPU util just fine. As you know, a LOT depends on your trace characteristics. e.g. I am looking at a recently captured UNC trace with less than 2% concurrent connections (the rest being sequential connections); but these concurrent connections are carrying 26% of the bytes. This is a separate question I am curious about -- I could not join the round table discussion remotely.
So, is there a document that may be released from those discussions? I'd be eager to look at that. Thanks. --Jay. SANGTAE HA wrote: > Jay, > > Thank you for the answer. One more question about this. > > Suppose that we extract N connection vectors (Ai, Bi, Ti), which are > identified by their (sip, dip) or (sip, sport, dip, dport), Tmix will > delegate each connection vector to one of machines in the testbed. > Yes, we can distribute these vectors evenly to each machine in the > testbed, or can dedicate all connection vectors to only one machine. I > just want to know how many connection vectors (or aggregate > throughput) can be handled by one commodity hardware (as you listed, > 1or 2GHz CPU 1GB RAM server). I can see the number of connections > extracted from the UNC trace is around 2500, and my simple calculation > from CDFs given in the paper gives < 250Mbps for burst traffic > (100Kbps per connection and 2500 connections at this time instance). > From your experience, this traffic can be handled by one or two pairs > of machines? :-) I know it highly depends on the socket buffer size > for each connection as well as other system specs. > > Sangtae > > >> It really depends on the capability of the machines. I have used medium grade >> machines, ~1GHz CPU 1GB RAM as well as 500MHz CPU with 256 MB RAM, within a lab >> network (I have not used tmix in ns2). The idea with tmix is that, given the >> original trace, you create a set of connection vectors for each pair of machines >> in the network, knowing the target throughput you wish to generate, and the >> capability and number of machines in your network. 
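[Editor's note: Sangtae's back-of-envelope sizing above (~2500 connections at ~100 Kbps each, Jay's guess of 2 machine pairs) can be sketched as a small helper. The per-pair limits below (~1250 well-tuned active connections and ~250 Mbps per pair) are assumptions inferred from Jay's estimate, not measurements.]

```python
import math


def machine_pairs_needed(n_connections, per_conn_kbps,
                         conns_per_pair, mbps_per_pair):
    """Estimate machine pairs needed to replay a set of connection vectors.

    Takes both limits into account: the number of active connections one
    well-tuned pair can sustain, and its aggregate forwarding rate.
    """
    aggregate_mbps = n_connections * per_conn_kbps / 1000
    by_conns = math.ceil(n_connections / conns_per_pair)
    by_rate = math.ceil(aggregate_mbps / mbps_per_pair)
    return max(by_conns, by_rate), aggregate_mbps


# Figures from the thread: ~2500 connections at ~100 Kbps burst each.
# Per-pair limits are assumed, roughly matching Jay's "2 pairs" guess.
pairs, mbps = machine_pairs_needed(2500, 100,
                                   conns_per_pair=1250, mbps_per_pair=250)
```

With these assumed limits the estimate lands on two machine pairs for ~250 Mbps of burst traffic, consistent with Jay's answer; real sizing would still depend on socket buffers and kernel tuning as noted above.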
From: sangtae.ha at gmail.com (SANGTAE HA) Date: Sat, 24 Nov 2007 21:48:07 -0500 Subject: [Tmrg] Traffic generators In-Reply-To: <47481CB1.8060702@email.unc.edu> References: <42834.24.59.255.143.1195696193.squirrel@webmail.cs.wisc.edu> <649aecc70711212029w152f1613j4359af70b5214ba3@mail.gmail.com> <4746F7B0.80107@email.unc.edu> <649aecc70711231809r7436185by6a603201b547e340@mail.gmail.com> <47481CB1.8060702@email.unc.edu> Message-ID: <649aecc70711241848m48898a4jd24b65987c19a95b@mail.gmail.com> Yes, the results of round table discussion will be announced to TMRG. Sangtae On Nov 24, 2007 7:44 AM, Jay Aikat wrote: > > This is a separate question I am curious about -- I could not join the round > table discussion remotely. So, is there a document that may be released from > those discussions? I'd be eager to look at that. Thanks. > --Jay. > From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 25 Nov 2007 16:46:50 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> References: <88d780b40711202345t61072e7ew43de0825a0ffca76@mail.gmail.com> <60BCCD59-CA40-4571-8E19-BFAD2889C56E@nokia.com> <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> Message-ID: Greetings Wang, On 22/11/2007, Wang gang wrote: > > I think the basic scenarios which we have agreed on is > > Topology: Dumb-Bell with three nodes at each side, > Parking-Lot with up to three bottleneck, > BW, RTT, buffer size settings > Background traffic, cross traffic distributions. > Collected metrics. > > Is that enough? >From memory, the parking-lot was not part of the "basic scenarios" -- that was a separate scenario. They were all dumbbell with three nodes at each side. 
The "basic scenarios" section will describe which combinations of a) RTT-distribution b) BW c) buffer size (packets? bytes?) d) AQM (RED? Droptail?) e) ratio of forward-traffic to reverse-traffic f) ratio of long-lived flows to transient flows to study. We can study all possible combinations. You, Larry and Lars have the hard job of writing a first draft working out how many combinations we can study (in Sally's "three days of simulation") and how we can choose the most representative scenarios. Perhaps start by listing the possible values for each, and then eliminating any combinations that are unlikely (like low BW links with less than one RTT worth of delay). One possibility would then be to choose one or two "typical" sets of parameters, and have a set of tests which varies only one of the parameters from that "typical" list. Do the sums to see how many tests you end up with, to work out how many "typical" sets will be needed. What do people on the list think of that approach? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Mon, 26 Nov 2007 07:34:27 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> References: <88d780b40711202345t61072e7ew43de0825a0ffca76@mail.gmail.com> <60BCCD59-CA40-4571-8E19-BFAD2889C56E@nokia.com> <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> Message-ID: <88d780b40711260734t42c073a0k34a3105d6cc47585@mail.gmail.com> Dear Lachlan, On Nov 25, 2007 4:46 PM, Lachlan Andrew wrote: > On possibility would then be to
choose one or two "typical" set of > parameters, and have a set of tests which varies only one of the > parameters from that "typical" list. Do the sums to see how many > tests you end up with to work out how many "typcial" sets will be > needed. What do people on the list think of that approach? I think the general scenario could explore a good sub-space of parameters that fit in a 3-day simulation. I believe that's the right thing to do. > Cheers, > Lachlan Cesar From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 26 Nov 2007 07:54:46 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> References: <88d780b40711202345t61072e7ew43de0825a0ffca76@mail.gmail.com> <60BCCD59-CA40-4571-8E19-BFAD2889C56E@nokia.com> <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> Message-ID: Greetings Cesar, On 26/11/2007, Cesar Marcondes wrote: > On Nov 25, 2007 4:46 PM, Lachlan Andrew wrote: > > On possibility would then be to choose one or two "typical" set of > > parameters, and have a set of tests which varies only one of the > > parameters from that "typical" list. Do the sums to see how many > > tests you end up with to work out how many "typcial" sets will be > > needed. What do people on the list think of that approach? > > I think the general scenario could explore a good sub-space of > parameters that fit in a 3-day simulation. I believe that's the right > thing to do. Yes, we should explore a good sub-set. When I said "a set of tests which varies only one of the parameters", I should have said that each test varies a *different* parameter (they don't all just vary the same parameter!). 
If we have three or four base "typical" scenarios and in each scenario we explore the impact of each parameter (or most parameters), I think that will allow a reasonably varied subset of scenarios to be explored. Do you agree? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Mon, 26 Nov 2007 08:02:40 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <88d780b40711260759v3a6e5fb1p3cf88263a552f23e@mail.gmail.com> References: <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> <88d780b40711260759v3a6e5fb1p3cf88263a552f23e@mail.gmail.com> Message-ID: <88d780b40711260802x6865a17an9d2c7d6513b87e89@mail.gmail.com> Yes, I agree on this approach. On Nov 26, 2007 7:54 AM, Lachlan Andrew wrote: > Greetings Cesar, > > > On 26/11/2007, Cesar Marcondes wrote: > > On Nov 25, 2007 4:46 PM, Lachlan Andrew wrote: > > > On possibility would then be to choose one or two "typical" set of > > > parameters, and have a set of tests which varies only one of the > > > parameters from that "typical" list. Do the sums to see how many > > > tests you end up with to work out how many "typcial" sets will be > > > needed. What do people on the list think of that approach? > > > > I think the general scenario could explore a good sub-space of > > parameters that fit in a 3-day simulation. I believe that's the right > > thing to do. > > Yes, we should explore a good sub-set. 
> > When I said "a set of tests which varies only one of the parameters", > I should have said that each test varies a *different* parameter (they > don't all just vary the same parameter!). If we have three or four > base "typical" scenarios and in each scenario we explore the impact of > each parameter (or most parameters), I think that will allow a > reasonably varied subset of scenarios to be explored. Do you agree? > > > Cheers, > Lachlan From: wanggang at research.nec.com.cn (Wang gang) Date: Tue, 27 Nov 2007 09:03:40 +0800 Subject: [Tmrg] Round-table PFLDnet submission References: <88d780b40711202345t61072e7ew43de0825a0ffca76@mail.gmail.com> <60BCCD59-CA40-4571-8E19-BFAD2889C56E@nokia.com> <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> Message-ID: <02b601c83091$59b16bd0$c44c1cac@ad.research.nec.com.cn> Lachlan, I totally agree with your idea. First list the possible combinations, then suggest a few that are typical and could be carried out in the first step. Cheers. ---------------------------------------- Wang Gang NEC Labs, China 010-62705180 (ext.511) wanggang at research.nec.com.cn -- CONFIDENTIAL------------------------------------------------------- This email is confidential. Recipient(s) named above is(are) obligated to maintain secrecy and is(are) not permitted to disclose the contents of this communication to others. Thank you!
---------------------------------------------------------------------- ----- Original Message ----- From: "Lachlan Andrew" To: "Wang gang" Cc: "tmrg" Sent: Monday, November 26, 2007 8:46 AM Subject: Re: Round-table PFLDnet submission > Greetings Wang, > > On 22/11/2007, Wang gang wrote: >> >> I think the basic scenarios which we have agreed on is >> >> Topology: Dumb-Bell with three nodes at each side, >> Parking-Lot with up to three bottleneck, >> BW, RTT, buffer size settings >> Background traffic, cross traffic distributions. >> Collected metrics. >> >> Is that enough? > > From memory, the parking-lot was not part of the "basic scenarios" -- > that was a separate scenario. They were all dumbbell with three nodes > at each side. The "basic scenarios" section will describe which > combinations of > a) RTT-distribution > b) BW > c) buffer size (packets? bytes?) > d) AQM (RED? Droptail?) > d) ratio of forward-traffic to reverse-traffic > e) ratio of long-lived flows to transient flows > to study. We can study all possible combinations. > > You, Larry and Lars have the hard job of writing a first-draft of > working out how many combinations we can study (in Sally's "three days > of simulation") and how we can choose the most representative > scenarios. > > Perhaps start by listing the possible values for each, and then > eliminating any combinations that are unlikely (like low BW links with > less than one RTT worth of delay). > > On possibility would then be to choose one or two "typical" set of > parameters, and have a set of tests which varies only one of the > parameters from that "typical" list. Do the sums to see how many > tests you end up with to work out how many "typcial" sets will be > needed. What do people on the list think of that approach? 
> > Cheers, > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/~lachlan > From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 26 Nov 2007 22:37:13 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <88d780b40711260734t42c073a0k34a3105d6cc47585@mail.gmail.com> References: <88d780b40711202345t61072e7ew43de0825a0ffca76@mail.gmail.com> <60BCCD59-CA40-4571-8E19-BFAD2889C56E@nokia.com> <88d780b40711211027g6379ee66i1b91105ac36c255b@mail.gmail.com> <88d780b40711211310q4eaa4654o33879aefd26c4718@mail.gmail.com> <20071122185937.bezjziem15ogkso8@tadorne.ens-lyon.fr> <019301c82d70$49d23880$c44c1cac@ad.research.nec.com.cn> <88d780b40711260732u7cf95f6g6f0b5ab57c164671@mail.gmail.com> <88d780b40711260734t42c073a0k34a3105d6cc47585@mail.gmail.com> Message-ID: Cesar - > I think the general scenario could explore a good sub-space of > parameters that fit in a 3-day simulation. I believe that's the right > thing to do. My assumption was that the plan was to come up with a basic set of tests that could be run in simulations over two to three days, and that could be run by an experimenter in a testbed with a reasonable amount of effort, with the basic set of tests including not only the general scenario (Part A of the draft paper) but also delay/throughput tradeoffs (Part B), convergence times (Part C), transients (Part D), impact on TCP traffic (Part E), intra-protocol fairness (Part F), and multiple bottlenecks (Part G). Many of the more detailed scenarios, such as the exploration of transients, might be able to be run in a fairly moderate amount of simulation time; e.g., exploring a very small subset of the parameter space for the exploration of transients might be just fine.
My suggestion for the "general scenario" would be that we choose the types of congested links to be explored (some characterized purely by bandwidth, with others characterized more broadly as "congested satellite link" or "congested data center link", as I said in my earlier email), and that for each congested link, with the dumbbell topology, we explore that subset of the parameter space that seems somewhat realistic, *and* that seems most likely to cause problems or to illustrate a new set of tradeoffs for proposed congestion control mechanisms. In particular, for each type of congested link, I would suggest that we explore a range of levels of congestion (as that is a fundamental parameter to explore for congestion control mechanisms). For *some* of the types of congested links in the general scenario, we might want to vary the range of RTTs, or the queue management parameters (buffer size, packets or bytes, AQM or Drop-Tail), or the parameter for the heavy-tailed distribution for the transfer sizes, or something else, but given the desire for a core set of simulations/experiments that can be run in a reasonable amount of time, I think we have to leave the task of exploring much of the space as an open research task for someone else, and in this test suite explicitly concentrate on those scenarios that have been shown to be problematic in the past for someone (e.g., for HSTCP, or for delay-based congestion control protocols, or for very aggressive protocols, or for very timid protocols, etc.), or that have been shown to differentiate between different congestion control mechanisms. And as new scenarios are uncovered by researchers that are somewhat realistic and that illustrate a new set of strengths or weaknesses of particular congestion control mechanisms, they can be added to the core set of scenarios. 
In particular, my own experience is that two or three days of simulation time is not really all that much, and that we are going to have to be rather draconian in fashioning a core set of tests that can all be run by a researcher in a few days of simulations. The basic set of tests to be run in test-beds might be able to be much larger than the set of tests to be run in simulators (particularly the subset of tests that explore a high-bandwidth congested link). For the "general scenario" in Section A, it might be necessary pretty early on to propose different subsets of tests for simulations and for testbeds, and to try to prune the proposed tests for simulations down to the essential subset. Take care, - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 28 Nov 2007 12:02:51 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: <6629ccb361ac01b8abf0562fc8a996d8@mac.com> References: <88d780b40711202301x2b88638dr825ec577a91d9f31@mail.gmail.com> <6629ccb361ac01b8abf0562fc8a996d8@mac.com> Message-ID: Greetings Sally and everyone, In the description of delay/throughput tradeoff, it talks about "moderate congestion" as 1-2% packet loss with NewReno. Unless I'm mistaken, that says "windows should be about 1/sqrt(0.01)=10 packets" (to within a small factor). I'd prefer not to quantify the load that way. Consider some scenarios: 56kbit/s: 10 packets of 12000 bits > 200ms. That means that for 56k tests with inter-city RTTs (50ms), a moderate level of load would be *half* of one flow. 100Mbit/s bottleneck, 100ms path. "Moderate" congestion would be when 2000 flows each gets about 50kbit/s. To me, that is very heavy load. Indeed, however large the bottleneck bandwidth is, "moderate" congestion would be when 100ms paths give 50kbit/s per user. I'd much prefer to specify the load in terms of the offered load as a fraction of bandwidth.
I propose an alternative: The "load" is the average number of flows if the traffic was served by an M/G/1 queue with an ideal processor-sharing service discipline. My reasons are: 1. This scales properly as capacity increases, and is correctly independent of RTT 2. A processor-sharing M/G/1 queue is a model of roughly what we're aiming for with a single bottleneck (equal instantaneous rates). 3. For loads like 10%, this simply corresponds to 10% of the bandwidth. 4. It reflects that, even at extreme overload, we want to consider a system whose average number of flows doesn't increase with time. Otherwise, the results would be very sensitive to duration, and we agreed that we should try to design tests which are not sensitive to the parameters. Thoughts? Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 2 Dec 2007 23:49:41 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: References: <88d780b40711202301x2b88638dr825ec577a91d9f31@mail.gmail.com> <6629ccb361ac01b8abf0562fc8a996d8@mac.com> Message-ID: Greetings all, Does silence mean people are happy with my new proposal to measure load in terms of simultaneous sessions in a processor sharing M/G/1 queue? We're aiming to have this settled within a week, so now would be a good time to comment on this or any other issues with the document (see attached .dvi). Also, I'd ask all authors to commit regularly to CVS so that we can all see the latest. Currently it looks like the RTT section is entirely empty. Sally, do you mind if I cut-and-paste the discussion of RTTs from your section into that section? Again, I'll take silence as permission :) (We can always back it out of CVS.) 
Cheers, Lachlan On 28/11/2007, Lachlan Andrew wrote: > Greetings Sally and everyone, > > In the description of delay/throughput tradeoff, it talks about > "moderate congestion" as 1-2% packet loss with NewReno. Unless I'm > mistaken, that says "windows should be about 1/sqrt(0.01)=10 packets" > (to within a small factor). I'd prefer not to quantify the load that > way. Consider some scenarios: > > 56kbit/s: 10 packets of 12000 bits > 200ms. That means that for 56k > tests with inter-city RTTs (50ms), a moderate level of load would be > *half* of one flow. > > 100Mbit/s bottleneck, 100ms path. "Moderate" congestion would be when > 2000 flows each gets about 50kbit/s. To me, that is very heavy load. > Indeed, however large the bottleneck bandwidth is, "moderate" > congestion would be when 100ms paths give 50kbit/s per user. > > > I'd much prefer to specify the load in terms of the offered load as a > fraction of bandwidth. > > I propose an alternative: The "load" is the average number of flows > if the traffic was served by an M/G/1 queue with an ideal > processor-sharing service discipline. > > My reasons are: > 1. This scales properly as capacity increases, and is correctly > independent of RTT > > 2. A processor-sharing M/G/1 queue is a model of roughly what we're > aiming for with a single bottleneck (equal instantaneous rates). > > 3. For loads like 10%, this simply corresponds to 10% of the bandwidth. > > 4. It reflects that, even at extreme overload, we want to consider a > system whose average number of flows doesn't increase with time. > Otherwise, the results would be very sensitive to duration, and we > agreed that we should try to design tests which are not sensitive > to the parameters. > > Thoughts? 
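[Editor's note: the load definition proposed above can be made concrete with a short sketch, not part of the original thread. For an M/G/1 processor-sharing queue with flow arrival rate lambda, mean flow size E[S] and link capacity C, the offered load is rho = lambda * E[S] / C and the stationary mean number of simultaneous flows is rho / (1 - rho); the example numbers are invented.]

```python
def offered_load(arrival_rate, mean_flow_bits, capacity_bps):
    """Offered load rho: fraction of bottleneck bandwidth demanded.

    Must be < 1 for the system to be stable.
    """
    return arrival_rate * mean_flow_bits / capacity_bps


def mean_flows_ps(rho):
    """Mean number of simultaneous flows in an M/G/1-PS queue.

    This is insensitive to the flow-size distribution beyond its mean,
    which is one attraction of the proposal: a "10% load" test means
    the same thing regardless of the file-size model used.
    """
    if rho >= 1:
        raise ValueError("overloaded: mean number of flows grows without bound")
    return rho / (1 - rho)


# Example (invented numbers): 1 Mbit mean flow size, 10 flows/s arriving
# on a 100 Mbit/s bottleneck -> rho = 0.1, i.e. "10% load".
rho = offered_load(10, 1e6, 100e6)
flows = mean_flows_ps(rho)
```

This also illustrates point 4 of the proposal: as rho approaches 1 the mean number of flows diverges, so only loads below capacity give duration-insensitive tests.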
-- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan -------------- next part -------------- A non-text attachment was scrubbed... Name: pfldnet2008.dvi Type: application/x-dvi Size: 30832 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071202/64a00fe6/attachment-0001.dvi From: sangtae.ha at gmail.com (SANGTAE HA) Date: Mon, 3 Dec 2007 13:52:17 -0500 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) Message-ID: <649aecc70712031052k3d82e71ao4b9198374b37c99e@mail.gmail.com> Hi all, We have two compelling traffic generators, Tmix [1] and Harpoon [2]; one of them will be used as the common traffic generator for TCP testing. Before deciding which traffic generator to use, I list some simple comparisons between them. Feel free to update the table.

----------------------------------------------------------------
             Tmix                      Harpoon
----------------------------------------------------------------
TCP/UDP      application-level TCP     application-level TCP/UDP
----------------------------------------------------------------
Model        *(a,b,t) model            inter-arrival time and
                                       file size distributions
----------------------------------------------------------------
Trace        tcpdump                   flow-tool (from routers)
             *manual                   *manual
----------------------------------------------------------------
Supported    Linux                     Linux
             FreeBSD                   (FreeBSD)
             NS2
----------------------------------------------------------------
*(a,b,t) = (request size, response size, user think time)
* "manual" means it supports user-generated vectors or distribution tables

Briefly, Tmix supports more platforms (NS2) while Harpoon includes additional UDP generation. After reading the Tmix paper, it looks like the (a,b,t) model can represent user interactions better than a model based on inter-arrival time and file size distributions. Your comments are welcome.
Sangtae [1] M. Weigle, P. Adurthi, F. Hernandez-Campos, K. Jeffay and F. D. Smith, Tmix: A Tool for Generating Realistic TCP Application Workloads in ns-2, CCR, July 2006 [2] J. Sommers and P. Barford, Self-Configuring Network Traffic Generation, IMC 2004. From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 3 Dec 2007 11:28:23 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: <649aecc70712031052k3d82e71ao4b9198374b37c99e@mail.gmail.com> References: <649aecc70712031052k3d82e71ao4b9198374b37c99e@mail.gmail.com> Message-ID: Greetings Sangtae, On 03/12/2007, SANGTAE HA wrote: > We have two compelling traffic generators, Tmix[1] and Harpoon[2], one > of them will be used as a common traffic generator for TCP testing. > Before deciding which traffic geneator we would go, I list up simple > comparisons between them. Feel free to update the table. > > ---------------------------------------------------------------- > Tmix Harpoon > ---------------------------------------------------------------- > TCP/UDP application-level application-level > TCP TCP/UDP > ---------------------------------------------------------------- > Model *(a,b,t) model inter-arrival time and > file size distributions > ---------------------------------------------------------------- > Trace tcpdump flow-tool (from routers) > *manual *manual > ---------------------------------------------------------------- > Supported Linux Linux > FreeBSD (FreeBSD) > NS2 > ---------------------------------------------------------------- > > *(a,b,t) = (request size, response size, user think time) > * "manual" means it supports user-generated vectors or distribution tables > > Briefly, Tmix supports more platforms (NS2) while Harpoon includes an > additional UDP generation. > After reading the Tmix paper, it looks *(a,b,t) model can represent > user-interactions better than the model based on inter-arrival and > file size distributions. Thanks for checking this out. 
I notice that Tmix aims to model non-greedy TCP connections. The "think times" are not times between user connections, but pauses within a connection. Will that make it harder for us to collect statistics? If we're measuring things like "file completion time", it is much harder to define what a "file" is if it is just part of a long-running non-greedy TCP connection. Tmix is clearly a more general model, but I personally prefer the simplicity of considering TCP sources to be greedy. It simplifies distinguishing between the effect of slow-start vs normal operation. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 4 Dec 2007 00:10:20 -0800 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: References: <88d780b40711202301x2b88638dr825ec577a91d9f31@mail.gmail.com> <6629ccb361ac01b8abf0562fc8a996d8@mac.com> Message-ID: <907c0ed0462c4a2f923764e8eb9cf32c@mac.com> Lachlan - > Does silence mean people are happy with my new proposal to measure > load in terms of simultaneous sessions in a processor sharing M/G/1 > queue? Sorry, I am at IETF in Vancouver this week, and swamped with a number of things, but I will try to get to the PFLDnet email in the next few days... ... > Currently it looks like the RTT section is entirely empty. Sally, do > you mind if I cut-and-paste the discussion of RTTs from your section > into that section? Again, I'll take silence as permission :) (We can > always back it out of CVS.) Any cutting and pasting is fine by me. 
- Sally http://www.icir.org/floyd/ From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Tue, 4 Dec 2007 00:27:53 -0800 Subject: [Tmrg] Fwd: Round-table PFLDnet submission In-Reply-To: <88d780b40712040024i31a3590dh45217a3c19c34a2@mail.gmail.com> References: <88d780b40711202301x2b88638dr825ec577a91d9f31@mail.gmail.com> <6629ccb361ac01b8abf0562fc8a996d8@mac.com> <88d780b40712040024i31a3590dh45217a3c19c34a2@mail.gmail.com> Message-ID: <88d780b40712040027h7ba73735nad38950cb3f14b6c@mail.gmail.com> Dear Lachlan, On Dec 2, 2007 11:49 PM, Lachlan Andrew wrote: > Greetings all, > > Does silence mean people are happy with my new proposal to measure > load in terms of simultaneous sessions in a processor sharing M/G/1 > queue? Sorry, I'm a bit busy these days, but I will try to comment on your load proposal. 1) "The load is varied by scaling the interarrival times by a constant. We invite other researchers to test the assumption that the file-size distribution is independent of the load". This sounds reasonable to me: files exist on hard drives, and IMHO their sizes shouldn't depend on the load, unless the size of dynamic content depends on the load. 2) Is there a straightforward algorithm to obtain the mean queue size of an M/G/1 queue using shortest-remaining-processing-time-first? My concern here is practical: is it necessary to code an M/G/1 queue-size solver in ns-2 for the test suite? How about deriving loads based on standard Reno instead? We could set up a fixed number N of independent, non-infinite TCP streams and measure the load under standard Reno; searching over N would then find where the load is ~10%, and so on. > We're aiming to have this settled within a week, so now would be a > good time to comment on this or any other issues with the document > (see attached .dvi). > > Also, I'd ask all authors to commit regularly to CVS so that we can > all see the latest. ok. > Currently it looks like the RTT section is entirely empty.
Sally, do > you mind if I cut-and-paste the discussion of RTTs from your section > into that section? Again, I'll take silence as permission :) (We can > always back it out of CVS.) I run the scripts from http://www.icir.org/models/sims.html. In the case of figure 4, the site says ... ## Figure 4, web traffic and long-lived flows, with a range of RTTs: ./ns sims.tcl -flows 18 -web 400 -rtts 1 -title two > two.data csh sims.cmd; cp reda.eps sims2.eps; gv sims2.eps & the log with the access links RTTs w/ 18 access links is attached. > > Cheers, > Lachlan > > > On 28/11/2007, Lachlan Andrew wrote: > > Greetings Sally and everyone, > > > > In the description of delay/throughput tradeoff, it talks about > > "moderate congestion" as 1-2% packet loss with NewReno. Unless I'm > > mistaken, that says "windows should be about 1/sqrt(0.01)=10 packets" > > (to within a small factor). I'd prefer not to quantify the load that > > way. Consider some scenarios: > > > > 56kbit/s: 10 packets of 12000 bits > 200ms. That means that for 56k > > tests with inter-city RTTs (50ms), a moderate level of load would be > > *half* of one flow. > > > > 100Mbit/s bottleneck, 100ms path. "Moderate" congestion would be when > > 2000 flows each gets about 50kbit/s. To me, that is very heavy load. > > Indeed, however large the bottleneck bandwidth is, "moderate" > > congestion would be when 100ms paths give 50kbit/s per user. > > > > > > I'd much prefer to specify the load in terms of the offered load as a > > fraction of bandwidth. > > > > I propose an alternative: The "load" is the average number of flows > > if the traffic was served by an M/G/1 queue with an ideal > > processor-sharing service discipline. > > > > My reasons are: > > 1. This scales properly as capacity increases, and is correctly > > independent of RTT > > > > 2. A processor-sharing M/G/1 queue is a model of roughly what we're > > aiming for with a single bottleneck (equal instantaneous rates). > > > > 3. 
For loads like 10%, this simply corresponds to 10% of the bandwidth. > > > > 4. It reflects that, even at extreme overload, we want to consider a > > system whose average number of flows doesn't increase with time. > > Otherwise, the results would be very sensitive to duration, and we > > agreed that we should try to design tests which are not sensitive > > to the parameters. > > > > Thoughts? > > > -- > > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/~lachlan > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: rtts_bettermodels_fig4.txt Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20071204/6fb20be4/attachment.txt From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Tue, 4 Dec 2007 00:41:46 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: References: <649aecc70712031052k3d82e71ao4b9198374b37c99e@mail.gmail.com> Message-ID: <88d780b40712040041n141be25dgec24a2581f00403f@mail.gmail.com> Dear Sangtae, I agree with Lachlan. I think the extra complexity of application behavior forcing the pace of TCP transmissions (by waiting on read/write times) just adds difficulty in isolating the TCP congestion-control behavior, given the specific goals of the TCP Test Suite. Just my 2 cents, Cesar On Dec 3, 2007 11:28 AM, Lachlan Andrew wrote: > Greetings Sangtae, > > On 03/12/2007, SANGTAE HA wrote: > > We have two compelling traffic generators, Tmix[1] and Harpoon[2], one > > of them will be used as a common traffic generator for TCP testing. > > Before deciding which traffic geneator we would go, I list up simple > > comparisons between them.
Feel free to update the table. > > > > ---------------------------------------------------------------- > > Tmix Harpoon > > ---------------------------------------------------------------- > > TCP/UDP application-level application-level > > TCP TCP/UDP > > ---------------------------------------------------------------- > > Model *(a,b,t) model inter-arrival time and > > file size distributions > > ---------------------------------------------------------------- > > Trace tcpdump flow-tool (from routers) > > *manual *manual > > ---------------------------------------------------------------- > > Supported Linux Linux > > FreeBSD (FreeBSD) > > NS2 > > ---------------------------------------------------------------- > > > > *(a,b,t) = (request size, response size, user think time) > > * "manual" means it supports user-generated vectors or distribution tables > > > > Briefly, Tmix supports more platforms (NS2) while Harpoon includes an > > additional UDP generation. > > After reading the Tmix paper, it looks *(a,b,t) model can represent > > user-interactions better than the model based on inter-arrival and > > file size distributions. > > Thanks for checking this out. > > I notice that Tmix aims to model non-greedy TCP connections. The > "think times" are not times between user connections, but pauses > within a connection. Will that make it harder for us to collect > statistics? If we're measuring things like "file completion time", it > is much harder to define what a "file" is if it is just part of a > long-running non-greedy TCP connection. > > Tmix is clearly a more general model, but I personally prefer the > simplicity of considering TCP sources to be greedy. It simplifies > distinguishing between the effect of slow-start vs normal operation. 
> > Cheers, > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/~lachlan > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > From: mweigle at cs.odu.edu (Michele Weigle) Date: Tue, 4 Dec 2007 07:02:45 -0500 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) Message-ID: There is work currently ongoing to implement the three simulators Tmix, Harpoon, and Swing (from Amin Vahdat's group at UCSD, http://www.cs.ucsd.edu/~kvishwanath/Swing/) in ns-2, GTNetS, and the upcoming ns-3. So far, we have Tmix implemented in ns-2 and GTNetS. We're currently in the process of validating the GTNetS implementation and preparing both versions for release. In addition to this simulation-based work, collaborators at UNC will be building a framework for Linux and FreeBSD in which any of these three simulators could be used in testbed experiments. See http://nsf.gov/awardsearch/showAward.do?AwardNumber=0709081 Also, I believe that the Tmix model can be extended to support UDP traffic, but I'm not sure if that's been implemented yet. Regarding the pauses that are part of the Tmix model, these pauses are used to represent the time between complete application data units (ADUs), which are essentially files. If you were modeling HTTP connections, for example, the 'a's would be requests and 'b's would be responses. You are right that Tmix can model persistent HTTP connections, where there are pauses in a single connection. If you wanted to have a set of long-lived greedy TCP flows, you could construct connection vectors to give you such behavior. 
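Michele's point that long-lived greedy flows are a special case of connection vectors can be sketched as follows. This is a hypothetical illustration of the idea, not Tmix's actual vector format:

```python
def greedy_connection_vector(total_bytes, adu_bytes=1_000_000):
    """Approximate a greedy TCP flow as (a, b, t) epochs: a one-byte
    'request', a large response ADU, and zero think time between epochs,
    so the source always has data queued."""
    vector = []
    remaining = total_bytes
    while remaining > 0:
        adu = min(adu_bytes, remaining)
        vector.append((1, adu, 0.0))   # t = 0: no pause => never idle
        remaining -= adu
    return vector

vec = greedy_connection_vector(2_500_000)
print(len(vec))                          # 3 epochs
print(sum(b for _, b, _ in vec))         # 2500000 bytes total
print(all(t == 0.0 for _, _, t in vec))  # True: no think time anywhere
```

Conversely, inserting nonzero think times between the epochs gives the non-greedy, persistent-connection behavior discussed above.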
-Michele On Dec 3, 2007, at 1:52 PM, SANGTAE HA wrote: Hi all, We have two compelling traffic generators, Tmix[1] and Harpoon[2], one of them will be used as a common traffic generator for TCP testing. Before deciding which traffic geneator we would go, I list up simple comparisons between them. Feel free to update the table. ---------------------------------------------------------------- Tmix Harpoon ---------------------------------------------------------------- TCP/UDP application-level application-level TCP TCP/UDP ---------------------------------------------------------------- Model *(a,b,t) model inter-arrival time and file size distributions ---------------------------------------------------------------- Trace tcpdump flow-tool (from routers) *manual *manual ---------------------------------------------------------------- Supported Linux Linux FreeBSD (FreeBSD) NS2 ---------------------------------------------------------------- *(a,b,t) = (request size, response size, user think time) * "manual" means it supports user-generated vectors or distribution tables Briefly, Tmix supports more platforms (NS2) while Harpoon includes an additional UDP generation. After reading the Tmix paper, it looks *(a,b,t) model can represent user-interactions better than the model based on inter-arrival and file size distributions. Welcome your comments. Sangtae [1] M. Weigle, P. Adurthi, F. Hernandez-Campos, K. Jeffay and F. D. Smith, Tmix: A Tool for Generating Realistic TCP Application Workloads in ns-2, CCR, July 2006 [2] J. Sommers and P. Barford, Self-Configuring Network Traffic Generation, IMC 2004. _______________________________________________ Tmrg-interest mailing list Tmrg-interest at ICSI.Berkeley.EDU http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest -- Michele Weigle Assistant Professor Department of Computer Science Old Dominion University Norfolk, VA 23539 mweigle at cs.odu.edu http://www.cs.odu.edu/~mweigle (757) 683-6001 ext. 
5050 From: ldunn at cisco.com (Lawrence D. Dunn) Date: Tue, 4 Dec 2007 15:54:19 -0600 Subject: [Tmrg] Round-table PFLDnet submission In-Reply-To: References: <88d780b40711202301x2b88638dr825ec577a91d9f31@mail.gmail.com> <6629ccb361ac01b8abf0562fc8a996d8@mac.com> Message-ID: Lachlan, Silence from me just meant that I haven't had time to ponder it sufficiently. Since I trust you quite a bit, if "enough" others weigh in, and rough consensus is declared before I have a chance to think on it, I won't gripe. ;-) Larry -- At 11:49 PM -0800 12/2/07, Lachlan Andrew wrote: >Greetings all, > >Does silence mean people are happy with my new proposal to measure >load in terms of simultaneous sessions in a processor sharing M/G/1 >queue? > >We're aiming to have this settled within a week, so now would be a >good time to comment on this or any other issues with the document >(see attached .dvi). > >Also, I'd ask all authors to commit regularly to CVS so that we can >all see the latest. > >Currently it looks like the RTT section is entirely empty. Sally, do >you mind if I cut-and-paste the discussion of RTTs from your section >into that section? Again, I'll take silence as permission :) (We can >always back it out of CVS.) > >Cheers, >Lachlan > >On 28/11/2007, Lachlan Andrew wrote: >> Greetings Sally and everyone, >> >> In the description of delay/throughput tradeoff, it talks about >> "moderate congestion" as 1-2% packet loss with NewReno. Unless I'm >> mistaken, that says "windows should be about 1/sqrt(0.01)=10 packets" >> (to within a small factor). I'd prefer not to quantify the load that >> way. Consider some scenarios: >> >> 56kbit/s: 10 packets of 12000 bits > 200ms. That means that for 56k >> tests with inter-city RTTs (50ms), a moderate level of load would be >> *half* of one flow. >> >> 100Mbit/s bottleneck, 100ms path. "Moderate" congestion would be when >> 2000 flows each gets about 50kbit/s. To me, that is very heavy load.
>> Indeed, however large the bottleneck bandwidth is, "moderate" >> congestion would be when 100ms paths give 50kbit/s per user. >> >> >> I'd much prefer to specify the load in terms of the offered load as a >> fraction of bandwidth. >> >> I propose an alternative: The "load" is the average number of flows >> if the traffic was served by an M/G/1 queue with an ideal >> processor-sharing service discipline. >> >> My reasons are: >> 1. This scales properly as capacity increases, and is correctly >> independent of RTT >> >> 2. A processor-sharing M/G/1 queue is a model of roughly what we're >> aiming for with a single bottleneck (equal instantaneous rates). >> >> 3. For loads like 10%, this simply corresponds to 10% of the bandwidth. >> >> 4. It reflects that, even at extreme overload, we want to consider a >> system whose average number of flows doesn't increase with time. >> Otherwise, the results would be very sensitive to duration, and we >> agreed that we should try to design tests which are not sensitive >> to the parameters. >> >> Thoughts? 
> > >-- >Lachlan Andrew Dept of Computer Science, Caltech >1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >http://netlab.caltech.edu/~lachlan > >Content-Type: application/x-dvi; name=pfldnet2008.dvi >X-Attachment-Id: f_f9qpb2lb >Content-Disposition: attachment; filename=pfldnet2008.dvi > >Attachment converted: PB17.1.65GB:pfldnet2008.dvi ( / ) (002431F0) >_______________________________________________ >Tmrg-interest mailing list >Tmrg-interest at ICSI.Berkeley.EDU >http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 4 Dec 2007 14:41:59 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: References: Message-ID: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> From Michele: > Regarding the pauses that are part of the Tmix model, these pauses are > used to represent the time between complete application data units > (ADUs), which are essentially files. If you were modeling HTTP > connections, for example, the 'a's would be requests and 'b's would be > responses. You are right that Tmix can model persistent HTTP > connections, where there are pauses in a single connection. If you > wanted to have a set of long-lived greedy TCP flows, you could > construct connection vectors to give you such behavior. But if one wanted to use a realistic traffic model in one's scenarios, that would have to include non-greedy TCP traffic (e.g., HTTP 1.1 traffic, telnet traffic or other user-generated data, etc.) Non-greedy TCP traffic is not unusual in the real world, and can be a significant stressor on congestion control mechanisms that it would be a pity to ignore - e.g., TCP flows that *might* ramp up to a high sending rate, have a data-limited or idle period, and then continue with a lot of data to send again. When we are measuring file completion times, we can use greedy TCP connections to measure them, in a mix with other traffic. 
The heavy-tailed distribution of user wait times within TCP connections has been under discussion since 1994. E.g., "Wide-Area Traffic: The Failure of Poisson Modeling", Paxson, V. and Floyd, S., SIGCOMM 1994 (or the 1995 IEEE/ACM Transactions on Networking version). It has also been included in traffic models in ns-2 for many years (e.g., in Polly Huang's traffic generator listed at "http://www.icir.org/models/trafficgenerators.html"). - Sally http://www.icir.org/floyd/ From: dovrolis at cc.gatech.edu (Constantine Dovrolis) Date: Tue, 04 Dec 2007 18:23:21 -0500 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> References: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> Message-ID: <4755E169.1030800@cc.gatech.edu> folks, my apologies for jumping into the discussion, but 1. I want to loudly agree with Sally that we should be considering non-greedy TCP flows with heavy-tailed size distribution, and 2. we should be asking whether these non-greedy TCP flows are generated by an open-loop flow arrival process or by a closed-loop process that takes user thinking times (and perhaps limited patience) into account. A couple of related papers: http://www.cc.gatech.edu/fac/Constantinos.Dovrolis/Papers/ravi-openclosed.pdf http://www.cc.gatech.edu/fac/Constantinos.Dovrolis/Papers/pam07-ravi.pdf Constantine -------------------------------------------------------------- Constantine Dovrolis | 3346 KACB | 404-385-4205 Associate Professor | Networking and Telecommunications Group College of Computing | Georgia Institute of Technology dovrolis at cc.gatech.edu http://www.cc.gatech.edu/~dovrolis/ Sally Floyd wrote: > From Michele: >> Regarding the pauses that are part of the Tmix model, these pauses are >> used to represent the time between complete application data units >> (ADUs), which are essentially files. If you were modeling HTTP >> connections, for example, the 'a's would be requests and 'b's would be >> responses.
You are right that Tmix can model persistent HTTP >> connections, where there are pauses in a single connection. If you >> wanted to have a set of long-lived greedy TCP flows, you could >> construct connection vectors to give you such behavior. > > But if one wanted to use a realistic traffic model in one's scenarios, > that would have to include non-greedy TCP traffic (e.g., HTTP 1.1 > traffic, telnet traffic or other user-generated data, etc.) Non-greedy > TCP traffic is not unusual in the real world, and can be a significant > stressor on congestion control mechanisms that it would be a pity > to ignore - e.g., TCP flows that *might* ramp up to a high sending rate, > have a data-limited or idle period, and then continue with a lot of > data to send again. > > When we are measuring file completion times, we can use > greedy TCP connections to measure them, in a mix with other traffic. > > The heavy-tailed distribution of user wait times within TCP connections > has been under discussion since 1994. E.g., "Wide-Area Traffic: The > Failure of Poisson Modeling", Paxson, V. and Floyd, S., SIGCOMM 1994 > (or the 1995 IEEE/ACM Transactions on Networking version). And has > been included in traffic models in ns-2 for many years (e.g., in Polly > Huang's traffic generator listed in > "http://www.icir.org/models/trafficgenerators.html". > > - Sally > http://www.icir.org/floyd/ > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Thu, 6 Dec 2007 20:23:15 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: <4755E169.1030800@cc.gatech.edu> References: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> <4755E169.1030800@cc.gatech.edu> Message-ID: On 04/12/2007, Constantine Dovrolis wrote: > folks, my apologies for jumping into the discussion Not at all. Thanks for your input! 
> 1. I want to loudly agree with Sally that we should be > considering non-greedy TCP flows with heavy-tailed size > distribution, and > 2. we should be asking whether these non-greedy TCP flows > are generated by an open-loop flow arrival process or by a > closed-loop process that takes user thinking times (and > perhaps limited patience) into account. Just to clarify, in point 2, are you suggesting that there are idle/think times both within and between flows? I agree entirely that these are all important effects, which should be included in "version 2" of the test suite. I have several reasons for supporting the simpler models for the initial "version 1". I'd be interested in your thoughts on each. My strongest concerns are points 2(ii) and 4.

1. We agreed at the meeting that the load would be "open loop". That allows us to specify the offered load in a protocol-independent way. If the traffic is entirely closed-loop then the load depends on the protocols, making comparisons difficult. (Being open-loop does not preclude modelling the think-time between arrivals within a session.)

2. We need to ask what cost/benefit we get from the more complex models. (i) For some of our tests, this traffic is "cross traffic" which we're not measuring. In these tests, the results of Hohn, Veitch, and Abry (e.g., "The impact of the flow arrival process in Internet traffic") suggest that structure in the flow arrival process doesn't greatly affect the packet-level traffic. (ii) For cases where we're going to measure the performance of non-greedy flows, we need to define metrics for their performance which reflect the non-greediness. I don't think such measures are obvious. We can't use connection completion times, average rates, ...

3. These tests are not intended to be exhaustive.
As I said before the meeting, I'd rather the meeting result in one or two clearly-defined tests than a complete first draft of a test suite where none of the tests is specified well enough to allow comparisons. 4. I'm afraid of models with too many parameters which have to be estimated. I was under the impression that many studies have found distributions of *connection* sizes, but many fewer (if any) have studied the sizes of "bursts" within a connection. Will it matter if we get the sizes wrong? Another point related to parameter estimation is that I'm worried by the approach we agreed on of assuming that the file-size distribution is independent of the load, so that the load is simply proportional to the session arrival rate. It seems likely to me that higher load occurs when there is a brief influx of longer connections (say some BitTorrent users start up), rather than a brief rise in the session arrival rate. Could this have as big an impact as the choice of whether new "bursts" start their own connections or not? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Sun, 9 Dec 2007 18:21:39 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: References: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> <4755E169.1030800@cc.gatech.edu> Message-ID: <25cf62fb7743f744e0d7d20a7524ee09@mac.com> On Dec 6, 2007, at 8:23 PM, Lachlan Andrew wrote: > On 04/12/2007, Constantine Dovrolis wrote: >> folks, my apologies for jumping into the discussion > > Not at all. Thanks for your input! > >> 1. I want to loudly agree with Sally that we should be >> considering non-greedy TCP flows with heavy-tailed size >> distribution, and >> 2. 
we should be asking whether these non-greedy TCP flows >> are generated by an open-loop flow arrival process or by a >> closed-loop process that takes user thinking times (and >> perhaps limited patience) into account. > > Just to clarify, in point 2, are you suggesting that there are > idle/think times both within and between flows? > > I agree entirely that these are all important effects, which should be > included in "version 2" of the test suite. I have several reason for > supporting the simpler models for the initial "version 1". I'd be > interested in your thoughts on each. My strongest concerns are points > 2(ii) and 4. > > 1. We agreed at the meeting that the load would be "open loop". That > allows us to specify the offered load in a protocol-independent way. > If the traffic is entirely closed-loop then the load depends on the > protocols, making comparisons difficult. (Being open-loop does not > preclude modelling the think-time between arrivals within a session.) Closed-loop models are just as protocol-independent as open-loop models, I would say. The overall transfer time depends on the protocol used in either case. > 2. We need to ask what cost/benefit we get from the more complex > models. > (i) For some of our tests, this traffic is "cross traffic" which we're > not measuring. In these tests, the results of Hohn, Veitch, and Abry > (e.g., "The impact of the flow arrival process in Internet traffic") > suggest that structure in the flow arrival process doesn't greatly > affect the packet level traffic. > (ii) For cases where we're going to measure the performance of > non-greedy flows, we need to define metrics for their performance > which reflect the non-greediness. I don't think such measures are > obvious. We can't use connection completion times, average rates, ... Even if I was only looking at metrics about the behavior of long-lived flows, I would prefer for the "background traffic" to have user think times within TCP connections. 
This is more realistic, and increases the burstiness of the aggregate traffic in a way that affects all of the competing traffic. > 3. These tests are not intended to be exhaustive. As I said before > the meeting, I'd rather the meeting result in one or two > clearly-defined tests than a complete first draft of a test suite > where none of the tests is specified well enough to allow comparisons. I think we can do a complete first draft of a test suite. But I agree that these tests are definitely not intended to be exhaustive. > 4. I'm afraid of models with too many parameters which have to be > estimated. I was under the impression that many studies have found > distributions of *connection* sizes, but many fewer (if any) have > studied the sizes of "bursts" within a connection. Will it matter if > we get the sizes wrong? We will get it even more wrong if we don't include user think times within connections. One of the good areas for future work is for researchers to say "by the way, these results are quite sensitive to parameter X", or "these results are not at all sensitive to parameter Y". It is unavoidable, I think, that we will have to learn these things as we go along. > Another point related to parameter estimation is that I'm worried by > the approach we agreed on of assuming that the file-size distribution > is independent of the load, so that the load is simply proportional to > the session arrival rate. It seems likely to me that higher load > occurs when there is a brief influx of longer connections (say some > BitTorrent users start up), rather than a brief rise in the session > arrival rate. Could this have as big an impact as the choice of > whether new "bursts" start their own connections or not? I agree that this is a key concern. 
There are two ways to go: (1) models where the total load requested in a user session is independent of the level of congestion; and (2) models where the total load requested in a user session is explicitly dependent on the level of congestion. I assume that the world is like (2). As far as I know, most traffic generators are based on model (1). We could make an arbitrary attempt at model (2), or we could use model (1) and explicitly ask researchers to give us model (2) for traffic generation in the future. Either one sounds ok to me. - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 9 Dec 2007 19:07:27 -0800 Subject: [Tmrg] Traffic Generators (Harpoon and Tmix) In-Reply-To: <25cf62fb7743f744e0d7d20a7524ee09@mac.com> References: <6b38e9bc749fadc81e198ff39ebfadb5@mac.com> <4755E169.1030800@cc.gatech.edu> <25cf62fb7743f744e0d7d20a7524ee09@mac.com> Message-ID: Greetings Sally and all, On 09/12/2007, Sally Floyd wrote: > On Dec 6, 2007, at 8:23 PM, Lachlan Andrew wrote: > > 1. We agreed at the meeting that the load would be "open loop". That > > allows us to specify the offered load in a protocol-independent way. > > If the traffic is entirely closed-loop then the load depends on the > > protocols, making comparisons difficult. (Being open-loop does not > > preclude modelling the think-time between arrivals within a session.) > > Closed-loop models are just as protocol-independent as open-loop models, > I would say. > The overall transfer time depends on the protocol used in either case. The transfer time depends on the protocol in both cases. However, the *total* amount of cross traffic depends on the protocol in one case but not in the other. With an open-loop model, it is meaningful to talk about "10% cross traffic", because we specify how much data arrives over a long period of time. With a purely closed-loop model, inefficient algorithms will receive less traffic, because flows arrive less often.
There is AFAIK no way to specify that a closed-loop model gives x% cross traffic.

> Even if I was only looking at metrics about the behavior of long-lived flows, I would prefer for the "background traffic" to have user think times within TCP connections. This is more realistic, and increases the burstiness of the aggregate traffic in a way that affects all of the competing traffic.

There is certainly a good case for making the background traffic as realistic as possible, all else being equal. If Tmix does all the hard work and comes complete with representative traces, I'd be happy for us to specify that cross traffic be non-greedy. I still don't know what metrics would be meaningful if we're measuring non-greedy traffic. This will affect what we do for the cases where all traffic comes from the traffic generators (such as the throughput vs delay scenarios).

> One of the good areas for future work is for researchers to say "by the way, these results are quite sensitive to parameter X", or "these results are not at all sensitive to parameter Y". It is unavoidable, I think, that we will have to learn these things as we go along.

Agreed.

> There are two ways to go:
> (1) models where the total load requested in a user session is independent of the level of congestion; and
> (2) models where the total load requested in a user session is explicitly dependent on the level of congestion.
>
> I assume that the world is like (2). As far as I know, most traffic generators are based on model (1). We could make an arbitrary attempt at model (2), or we could use model (1) and explicitly ask researchers to give us model (2) for traffic generation in the future. Either one sounds ok to me.

I'd vote for using (1) and explicitly suggesting that it be modified. It would be a shame to "standardize" an arbitrary model.
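The open-loop model (1) discussed above can be sketched as a session-level traffic generator: Poisson session arrivals whose sizes come from a heavy-tailed distribution, so the offered load is arrival rate times mean session size, independent of what the transport protocols later achieve. This is only an illustrative sketch; the function name and parameters are mine, not taken from Harpoon, Tmix, or any other generator in the thread.

```python
import random

def poisson_session_arrivals(rate, mean_size_pkts, duration, alpha=1.5, seed=1):
    """Generate (arrival_time, session_size) pairs for an open-loop
    (model 1) source: Poisson arrivals, Pareto-distributed sizes.
    The offered load is rate * mean_size_pkts, regardless of how fast
    the transport actually delivers the data."""
    rng = random.Random(seed)
    # paretovariate(alpha) has minimum 1 and mean alpha/(alpha-1);
    # scale by xm so the session-size mean is mean_size_pkts.
    xm = mean_size_pkts * (alpha - 1) / alpha
    t, sessions = 0.0, []
    while True:
        t += rng.expovariate(rate)          # exponential inter-arrival times
        if t > duration:
            break
        sessions.append((t, xm * rng.paretovariate(alpha)))
    return sessions

sessions = poisson_session_arrivals(rate=10.0, mean_size_pkts=50.0, duration=1000.0)
# Long-run offered load ~ rate * mean_size_pkts (noisy, since the
# size distribution is heavy-tailed).
offered = sum(size for _, size in sessions) / 1000.0
```

A closed-loop source, by contrast, would draw its next arrival only after the previous transfer completed, so `offered` would depend on the congestion control in use.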
Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Sun, 9 Dec 2007 19:38:42 -0800
Subject: [Tmrg] Round-table PFLDnet submission
In-Reply-To: <753cb8cc282bced6d325993749250d3b@mac.com>
References: <20071209215603.v1gsa5smh4e8soc4@tadorne.ens-lyon.fr> <753cb8cc282bced6d325993749250d3b@mac.com>
Message-ID:

Greetings,

On 09/12/2007, Sally Floyd wrote:
>
> Greetings Romaric,
>
>> Quoting Lachlan Andrew :
> > I've given reasons that I think this test should be different and *not* have the pseudo-random background traffic:
> > 1) It adds statistical errors (different experiments will have different numbers of flows at the instant of interest)
> > 2) It does not really increase the realism in this case (flows are unlikely to arrive within the RTT or so during which the window is being reduced, so any flows present at the time are essentially "long-lived").
> >
> > If anyone can show how it does significantly add realism or how to avoid the statistical errors, then let's keep the background traffic. Otherwise, I strongly prefer to remove it.
>
> I think it is important to keep some amount of background traffic and reverse-path traffic in the scenarios about transients. Just for a start, the background traffic strongly affects the degree of synchronization in the loss events (do all flows have a loss at the same time, or not), and this can strongly affect any of the metrics that are being measured about the response to the transient events.

Synchronization is certainly relevant if there is a significant amount of background traffic. Remember that if we have 10% of traffic being background traffic, then about 80% of the time the only flow in the system is the long-lived flow of interest.
Consider a step increase in UDP traffic:
a) If the transient occurs during the normal 80%, then we didn't need the cross traffic -- the flow of interest is perfectly synchronized with itself, and will definitely suffer a loss within the first RTT (and most likely each RTT until it drops its rate enough) when the UDP starts.
b) If it occurs during the 20%, then we're measuring a statistical outlier, not the normal behaviour.

That means that the synchronization rate is not a strong reason to introduce cross traffic in this case. You mentioned that synchronization was "for a start". Could you list other reasons?

The transient when the UDP is decreased will last many RTTs, and there *is* certainly a case for including cross traffic there. We could test how long it takes for the entire ensemble of flows to reach 80% of full utilization. However, there is also the risk that we might end up measuring the time until a new flow (i.e. slow start) arrives rather than the time until normal congestion avoidance fills the pipe. That would become very sensitive to our traffic model. (Halving the size of files and doubling the rate of arrivals would make a big difference.) Does anyone have a suggestion for how to avoid that?

Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Mon, 10 Dec 2007 15:48:32 -0800
Subject: [Tmrg] Round-table PFLDnet submission - transient
In-Reply-To: <475D043F.6020603@ens-lyon.fr>
References: <20071209215603.v1gsa5smh4e8soc4@tadorne.ens-lyon.fr> <475D043F.6020603@ens-lyon.fr>
Message-ID:

Greetings Romaric,

On 10/12/2007, Romaric Guillier wrote:
> > However, the max decrease in one RTT isn't very informative. Should it be big or small? A flow which randomly sends 0 on odd RTTs and 10Gbps on even RTTs will have a very big "maximum decrease", but doesn't behave at all well.
>
> Yep, I agree with you on that, but if you consider the "time to reduce to 33%" metric, you will get a result like one RTT or less, but it won't tell you either that your protocol is unstable.

True. I argued against the "time to reduce to x%" metric at the meeting, but haven't come up with anything better. The reason for making it 33% rather than 50% as originally proposed was an attempt to look beyond the first "reduction by 50%" that many algorithms have.

To me, the "back-off" measure is about how much room is available for the newly-arriving flow, rather than the impact on the existing flow. That is one reason I don't like the "cost" in the original draft -- it only considers the impact on the long-lived flow, not the impact of that flow's response on other flows. Perhaps it would be better to measure the impact on other flows directly. How about the measure

    number of packets dropped by the UDP sources in the first x seconds

If the flow backs off nicely, the number of dropped packets will be small. Similarly, virtual queue algorithms will be rewarded for not causing *any* drops when a small UDP flow starts.

> If we want to measure the stability of a protocol when it is facing transient events,

No, I wasn't specifically wanting to measure stability. I just think that the sustained back-off is more important than the peak backoff. Maximum is a very fragile statistic.
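For concreteness, the proposed metric could be computed from a packet-event trace roughly as follows. This is only a sketch: the trace format, field names, and numbers are hypothetical, not from any tool mentioned in the thread.

```python
def udp_drops_in_window(events, udp_start, horizon):
    """Count packets from UDP sources dropped in the first `horizon`
    seconds after the UDP traffic starts (the proposed transient
    metric).  `events` is a hypothetical trace of
    (timestamp, flow_type, dropped) tuples."""
    return sum(1 for t, ftype, dropped in events
               if ftype == "udp" and dropped
               and udp_start <= t < udp_start + horizon)

# Made-up trace: UDP steps up at t = 10 s.
trace = [(9.9, "udp", False), (10.2, "udp", True), (10.5, "tcp", True),
         (12.0, "udp", True), (25.0, "udp", True)]
print(udp_drops_in_window(trace, udp_start=10.0, horizon=10.0))  # -> 2
```

A well-behaved TCP flow that vacates capacity quickly would keep this count near zero; the TCP drop at t = 10.5 and the late UDP drop at t = 25 are deliberately excluded.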
Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: romaric.guillier at ens-lyon.fr (Romaric Guillier)
Date: Tue, 11 Dec 2007 09:54:34 +0100
Subject: [Tmrg] Round-table PFLDnet submission - transient
In-Reply-To:
References: <20071209215603.v1gsa5smh4e8soc4@tadorne.ens-lyon.fr> <475D043F.6020603@ens-lyon.fr>
Message-ID: <475E504A.1020400@ens-lyon.fr>

hi

Lachlan Andrew wrote:
> Greetings Romaric,
>
> On 10/12/2007, Romaric Guillier wrote:
>>> However, the max decrease in one RTT isn't very informative. Should it be big or small? A flow which randomly sends 0 on odd RTTs and 10Gbps on even RTTs will have a very big "maximum decrease", but doesn't behave at all well.
>> Yep, I agree with you on that, but if you consider the "time to reduce to 33%" metric, you will get a result like one RTT or less, but it won't tell you either that your protocol is unstable.
>
> True. I argued against the "time to reduce to x%" metric at the meeting, but haven't come up with anything better. The reason for making it 33% not 50% as originally proposed was an attempt to look beyond the first "reduction by 50%" that many algorithms have.

and hope that we won't see new algorithms that will propose a reduction to 33% :)

> To me, the "back-off" measure is about how much room is available for the newly-arriving flow, rather than the impact on the existing flow. That is one reason I don't like the "cost" in the original draft -- it only considers the impact on the long-lived flow, not the impact of that flow's response on other flows.

Sorry, that was a distortion of reality due to the usual topic I'm studying. Of course, you are right, we shouldn't be favouring one part or the other of the problem.
But as a transient event could be caused by anything, like a routing change causing a sharp decrease in the available bandwidth, sudden congestion, or signal power dropping on wireless connections, I thought it was more interesting to focus on this one flow rather than on the load we generate to simulate the transient event.

> Perhaps it would be better to measure the impact on other flows directly. How about the measure
>
>     number of packets dropped by the UDP sources in the first x seconds
>
> If the flow backs off nicely, the number of dropped packets will be small. Similarly, virtual queue algorithms will be rewarded for not causing *any* drops when a small UDP flow starts.

That sounds nice. If the number is too big, the flow is too aggressive and might be dangerous. If the number is too small, well, you probably won't get good performance, but at least the protocol won't break the Internet.

cheers
Romaric

From: lastewart at swin.edu.au (Lawrence Stewart)
Date: Sat, 29 Dec 2007 13:22:52 +1100
Subject: [Tmrg] Modular/Pluggable TCP Congestion Control for FreeBSD
Message-ID: <4775AF7C.8020407@swin.edu.au>

Hi all,

We've been involved in a research project to implement and test an emerging TCP congestion control algorithm under FreeBSD. As a part of this, we've put together a patch for FreeBSD 7.0-BETA4 that modularises the congestion control code in the TCP stack. It allows new congestion control algorithms to be developed as loadable kernel modules. This improves FreeBSD's usefulness as a TCP research platform and makes it easier to customise the stack for specific scenarios like high bandwidth, long delay paths.

There is an accompanying technical report "Light-Weight Modular TCP Congestion Control for FreeBSD 7" [1] that covers the design, features, kernel interface and usage of the framework. Also on our website is a beta release of a module that implements the H-TCP [2] congestion control algorithm proposed by the Hamilton Institute.
We believe that modular congestion control is a worthwhile addition to FreeBSD. We've performed significant internal testing and there are currently no known issues or regressions with the implementation compared to a 'vanilla' FreeBSD 7.0-BETA4 kernel. We would welcome further review and testing from the wider community in the hope of getting this patch folded into FreeBSD 8-CURRENT.

SIFTR [3], our tool for monitoring FreeBSD kernel TCP connection state, has also received a minor update to v1.1.5, with the addition of 6 new, useful variables.

All code and documentation is available on our website [3].

Cheers,
Jim and Lawrence
http://caia.swin.edu.au

[1] http://caia.swin.edu.au/reports/071218A/CAIA-TR-071218A.pdf
[2] http://www.hamilton.ie/net/htcp3.pdf
[3] http://caia.swin.edu.au/urp/newtcp/tools.html

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Sat, 29 Dec 2007 13:37:44 -0800
Subject: [Tmrg] Metrics for releasing bandwidth
Message-ID:

Greetings TMRGers,

In a transient when new flows start (particularly non-rate-controlled "UDP" sources), how can we measure how quickly a TCP algorithm backs off? One suggestion was measuring how long it takes until it halves its window. Issues with that metric are:
- if the UDP flow is less than 50% of the bandwidth, the flow can respond ideally and instantly without ever halving its window.
- it ignores how quickly the TCP flow increases its window again after the reduction
- it focuses on the experience of the TCP flow, not the impact on the UDP flow
- different TCP flows may take different amounts of time to experience their first loss, especially if the new load arrives gradually as in a flash crowd.

As an alternative, I propose measuring the number of packets dropped by the UDP flow(s) from when it starts to 10(?) seconds after it reaches its peak rate.
This
- is meaningful for any rate of UDP cross traffic
- captures the entire duration of the transient, not just the start
- applies equally well if there are many TCP flows, or the UDP rate increases gradually.

Unfortunately, this has a "magic number" of 10s. Can anyone see other flaws, or better metrics?

Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Mon, 7 Jan 2008 14:07:42 -0800
Subject: [Tmrg] (limited) measurement of file size vs congestion level
Message-ID:

Greetings all,

During the discussion of what traffic parameters to change to test different levels of congestion, we noted the lack of measurements of file sizes w.r.t. congestion level. The measurements reported in Table I of

    K Shah, S Bohacek, "High short-term bit-rates from TCP Flows," MASCOTS, 2005

strongly suggest that increased congestion comes from increased mean file size, rather than simply an increased arrival rate. For different link utilisations, here are the mean file sizes and numbers of flows per hour:

    4%:    2.85 kByte   16.8M
    5.4%:  4.85 kByte   12.8M
    25%:   9.80 kByte   29.2M
    35%:   10.1 kByte   39.5M
    48%:   13.2 kByte   41.2M

This suggests that to increase congestion by a factor of x, we should increase both the arrival rate and the mean file size by a factor of sqrt(x).

Thoughts?
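Since the offered load is (session arrival rate) x (mean file size), scaling both knobs by sqrt(x) multiplies the load by exactly x. A trivial sketch (the 5.4% numbers come from the table above; the helper name is mine):

```python
import math

def scale_traffic(arrival_rate, mean_size, x):
    """Multiply the offered load (= arrival_rate * mean_size) by x
    while following the observed trend: scale both the arrival rate
    and the mean file size by sqrt(x)."""
    s = math.sqrt(x)
    return arrival_rate * s, mean_size * s

# The ~5.4% utilisation point from Table I: 12.8M flows/hour, 4.85 kByte mean.
rate, size = 12.8e6 / 3600.0, 4.85e3           # flows/sec, bytes
rate4, size4 = scale_traffic(rate, size, 4.0)  # target roughly 4x the load
assert abs((rate4 * size4) / (rate * size) - 4.0) < 1e-9
```

The alternative of scaling the arrival rate alone by x would give the same load but a much less bursty, less heavy-hitter-dominated mix than the measurements suggest.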
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: sallyfloyd at mac.com (Sally Floyd)
Date: Mon, 7 Jan 2008 19:00:19 -0800
Subject: [Tmrg] (limited) measurement of file size vs congestion level
In-Reply-To:
References:
Message-ID: <85e556a7e9ab3c208a851e253133f216@mac.com>

Lachlan -

> This suggests that to increase congestion by a factor of x, we should increase both the arrival rate and the mean file size by a factor of sqrt(x).

That sounds reasonable to me.

(While there has been a fair amount of research on characterizing aggregate traffic in the Internet, there hasn't been much about aggregate traffic on congested links. Not just with respect to the level of congestion, but the type or bandwidth of the congested link, etc. But maybe that will come.)

- Sally
http://www.icir.org/floyd/

On Jan 7, 2008, at 2:07 PM, Lachlan Andrew wrote:
> Greetings all,
>
> During the discussion of what traffic parameters to change to test different levels of congestion, we noted the lack of measurement of file sizes w.r.t. congestion level.
>
> The measurements reported in Table I of
>     K Shah, S Bohacek, "High short-term bit-rates from TCP Flows," MASCOTS, 2005
> suggest strongly that increased congestion comes from increased mean file size, rather than simply increased arrival rate.
>
> For different link utilisations, here are the mean file sizes and numbers of flows per hour:
>     4%:    2.85 kByte   16.8M
>     5.4%:  4.85 kByte   12.8M
>     25%:   9.80 kByte   29.2M
>     35%:   10.1 kByte   39.5M
>     48%:   13.2 kByte   41.2M
>
> This suggests that to increase congestion by a factor of x, we should increase both the arrival rate and the mean file size by a factor of sqrt(x).
>
> Thoughts?
> Lachlan
>
> --
> Lachlan Andrew  Dept of Computer Science, Caltech
> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
> Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
> http://netlab.caltech.edu/~lachlan
> _______________________________________________
> Tmrg-interest mailing list
> Tmrg-interest at ICSI.Berkeley.EDU
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest

From: sallyfloyd at mac.com (Sally Floyd)
Date: Mon, 7 Jan 2008 19:56:29 -0800
Subject: [Tmrg] (limited) measurement of file size vs congestion level
In-Reply-To:
References:
Message-ID:

Lachlan -

As an additional comment, my assumption would be that users' behavior is *heavily* affected by the level of congestion, but I don't know that much research has been done on this. That is, I would assume that users would cut their Internet browsing short in times of heavy congestion (i.e., of slow response times and long download times), in terms of the number of TCP connections initiated.

I wouldn't have had a good guess, however, what this would mean for the distribution of connection sizes. It might mean different things on different types of links.

- Sally
http://www.icir.org/floyd/

From: ricciato at ftw.at (ricciato)
Date: Tue, 08 Jan 2008 10:44:31 +0100
Subject: [Tmrg] (limited) measurement of file size vs congestion level
In-Reply-To:
References:
Message-ID: <478345FF.6050306@ftw.at>

Hi Sally, all [I am new to the list],

just a humble comment on the relationship between congestion and user behaviour.

1. There is a very preliminary study [1] investigating the issue.

2. We have found and reported in [2] a case of severe congestion on "our" network, i.e. a mobile UMTS network that we extensively and constantly monitor in our project [3]. There we show that the presence of the bottleneck changes the statistics of the aggregate traffic, of course. An analysis of the impact of user behaviour (change in file-size, abandoning, re-clicks, etc.)
based on the detailed packet-traces has always been on our to-do list, but so far we have not had time to work that out (there are some complications in doing user-level analysis, e.g. the file-size does not correspond to the TCP connection size, as the relationship file:connections is NOT 1:1 in modern applications, web, p2p etc.).

3. However, based on our experience (we have seen many severe congestion events in this network), I can report the following qualitative observations:

A. The user abandoning process seems to be "with threshold": if you consider the frequency of TCP RST as a gross indicator of user (or server) impatience, we saw that for mild congestion (right before the peak hour, on a congested link) the RSTs stay at a physiological level (pretty low), while the frequency sharply jumps to abnormally high values when the congestion becomes severe (during the peak hour).

B. If you look at the distribution of the number of packets downloaded by each user in fixed timebins (e.g. 1 min), you see that after a capacity upgrade that removes a congestion point, such distribution changes, with more users downloading more packets (as expected).

4. My expectation is that the users regulate the duration of the *session* and the total download rate (often across multiple parallel TCP connections) based on the experienced response time, so it is the *session* attributes (duration, rate), rather than the *file* ones, that are dependent on the congestion level. At the TCP level, this might mean that it's the connection arrival process that is mostly impacted, rather than the size (the latter is probably affected only in the tail of long files, which are probably truncated upon congestion). Furthermore, after a certain threshold (severe congestion), users or servers suddenly get crazy and start to reclick/reset the downloads, and eventually give up the session.

ciao
fabio

[1] "User patience and the Web: a hands-on investigation", by Rossi, Casetti, Mellia, @ Globecom 2003.
[2] F. Ricciato, F.
Vacirca, P. Svoboda, "Diagnosis of Capacity Bottlenecks via Passive Monitoring in 3G Networks: an Empirical Analysis", Computer Networks, vol. 51, n. 4, pp. 1205-1231, March 2007.
[3] http://userver.ftw.at/~ricciato/darwin/

Sally Floyd wrote:
> Lachlan -
>
> As an additional comment, my assumption would be that users' behavior is *heavily* affected by the level of congestion, but I don't know that much research has been done on this. That is, I would assume that users would cut their Internet browsing short in times of heavy congestion (i.e., of slow response times and long download times), in terms of the number of TCP connections initiated.
>
> I wouldn't have had a good guess, however, what this would mean for the distribution of connection sizes. It might mean different things on different types of links.
>
> - Sally
> http://www.icir.org/floyd/
>
> _______________________________________________
> Tmrg-interest mailing list
> Tmrg-interest at ICSI.Berkeley.EDU
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest

From: mascolo at poliba.it (Saverio Mascolo)
Date: Tue, 22 Jan 2008 19:28:11 +0100
Subject: [Tmrg] Implementation of XCP and similar feedback from routers ...
Message-ID: <01b901c85d24$a97804c0$723bccc1@HPSM>

Dear all,

I would like to ask who knows about the state of implementation, in commercial routers, of mechanisms that provide feedback to improve e2e congestion control, such as XCP and similar schemes.

Thanks for your attention,

Saverio Mascolo
Dipartimento di Elettrotecnica ed Elettronica
Politecnico di Bari

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080122/6b0887db/attachment.html

From: lachlan.andrew at gmail.com (Lachlan Andrew)
Date: Wed, 6 Feb 2008 21:22:59 -0800
Subject: [Tmrg] Mix of RTTs
Message-ID:

Greetings Sally,

I have a question about the connection between the traffic model and the RTTs to use in TCP analysis.

When the "better models" paper compares the simulated and measured RTT distributions, it mentions that most packets come from the short-RTT flows. That will clearly be the case if all flows are long-lived, or if the traffic model is "closed loop" in the sense that it consists only of alternating "think times" and fixed-time files.

If the traffic consists instead of Poisson arrivals of "sessions", each carrying a fixed amount of traffic (possibly in several think/send bursts), then the amount of data sent at each RTT is determined by the traffic model, independent of the actual RTTs.

At the round table, we agreed to have a traffic model of the second kind. Will that change the RTTs that we should use in the test suite? As I recall, you wanted to revise that section before the final submission anyway.
Cheers,
Lachlan

--
Lachlan Andrew  Dept of Computer Science, Caltech
1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA
Ph: +1 (626) 395-8820  Fax: +1 (626) 568-3603
http://netlab.caltech.edu/~lachlan

From: sallyfloyd at mac.com (Sally Floyd)
Date: Wed, 13 Feb 2008 17:21:31 -0800
Subject: [Tmrg] Towards a Common TCP Evaluation Suite
Message-ID: <2FE95FF6-9F61-45D1-9D2E-9BE2D1637B2E@mac.com>

Some of us (nine co-authors) have submitted a draft paper to PFLDnet 2008 on "Towards a Common TCP Evaluation Suite", and the draft paper has been accepted. This paper grew out of a workshop organized by Lachlan Andrew at Caltech last November. The draft paper is available from "http://www.icir.org/floyd/papers/pfldnet2008-draft.pdf". We are revising the paper now, and the final version is due on February 22. Any feedback would be welcome.

- Sally (one of the nine co-authors)
http://www.icir.org/floyd/

From: sallyfloyd at mac.com (Sally Floyd)
Date: Mon, 18 Feb 2008 18:17:50 -0800
Subject: [Tmrg] (limited) measurement of file size vs congestion level
In-Reply-To: <478345FF.6050306@ftw.at>
References: <478345FF.6050306@ftw.at>
Message-ID: <1FC26B49-CC2D-40C7-A034-1C4541050C86@mac.com>

Fabio -

Many thanks for the report.

- Sally

...
> 3. However, based on our experience (we have seen many severe congestion events in this network), I can report the following qualitative observations
>
> A. the user abandoning process seems to be "with threshold": if you consider the frequency of TCP RST as a gross indicator of user (or server) impatience, we saw that for mild congestion (right before the peak hour, on a congested link) the RSTs stay at a physiological level (pretty low), while the frequency sharply jumps to abnormally high values when the congestion becomes severe (during the peak hour)
>
> B. if you look at the distribution of the number of packets downloaded by each user in fixed timebins (e.g.
1 min), you see that after a capacity upgrade that removes a congestion point, such distribution changes, with more users downloading more packets (as expected).
>
> 4. My expectation is that the users regulate the duration of the *session* and the total download rate (often across multiple parallel TCP connections) based on the experienced response time, so it is the *session* attributes (duration, rate), rather than the *file* ones, that are dependent on the congestion level. At the TCP level, this might mean that it's the connection arrival process that is mostly impacted, rather than the size (the latter is probably affected only in the tail of long files, which are probably truncated upon congestion).
> Furthermore, after a certain threshold (severe congestion), users or servers suddenly get crazy and start to reclick/reset the downloads, and eventually give up the session.
...
> [1] "User patience and the Web: a hands-on investigation", by Rossi, Casetti, Mellia, @ Globecom 2003.
> [2] F. Ricciato, F. Vacirca, P. Svoboda, "Diagnosis of Capacity Bottlenecks via Passive Monitoring in 3G Networks: an Empirical Analysis", Computer Networks, vol. 51, n. 4, pp. 1205-1231, March 2007.
> [3] http://userver.ftw.at/~ricciato/darwin/

- Sally
http://www.icir.org/floyd/

From: sallyfloyd at mac.com (Sally Floyd)
Date: Mon, 18 Feb 2008 19:35:55 -0800
Subject: [Tmrg] Mix of RTTs
In-Reply-To:
References:
Message-ID:

Lachlan -

(Getting to old email...)

> I have a question about the connection between the traffic model and RTTs to use in TCP analysis.
>
> When the "better models" paper compares the simulated and measured RTT distributions, it mentions that most packets come from the short-RTT flows. That will clearly be the case if all flows are long-lived, or if the traffic model is "closed loop" in the sense that it consists only of alternating "think times" and fixed-time files.
> If the traffic consists instead of Poisson arrivals of "sessions", each carrying a fixed amount of traffic (possibly in several think/send bursts), then the amount of data sent at each RTT is determined by the traffic model, independent of the actual RTTs.

I don't understand this. Assume Poisson arrivals of sessions, each carrying a fixed amount of traffic. The amount of data sent in each RTT is determined by the end-to-end congestion control. For TCP, where in congestion avoidance a flow increases its sending rate by one packet per RTT, short-RTT flows send at a much higher sending rate *in packets per second* than do long-RTT flows, given the same packet drop rates for the two flows.

I agree that we want a traffic model of Poisson arrivals of sessions, each carrying a fixed amount of traffic (from a heavy-tailed distribution).

> At the round table, we agreed to have a traffic model of the second kind. Will that change the RTTs that we should use in the test suite?

Figure 5 from the Internet Research Needs Better Models paper has most of the traffic of the second kind above (from the traffic generator in ns-2, with Poisson arrivals of sessions, and heavy-tailed distributions of file sizes, along with other parameters), though there are a few long-lived flows in Figure 5 of that paper. The simulation was run for 100 seconds of simulation time, with packet drop rates over the second half of the simulation of roughly 3%. I would assume that at the end of the 100 seconds, the long-RTT flows had more unfilled demand than the short-RTT flows.

> As I recall, you wanted to revise that section before the final submission anyway.

The Internet Research Needs Better Models paper used RTTs that were uniformly distributed between 20 and 460 ms, in the absence of queueing delay. Table 1 of the PFLDnet paper now gives RTTs in the range of 4 to 200 ms, in the absence of queueing delay.
It has varied over time - for revision 1.13, Table 1 had a range of RTTs from 0 to 100 ms. In revision 1.14, Table 1 was changed to have a range of RTTs from 0 to 200 ms. In revision 1.15, this was changed (perhaps by me) to have a range of RTTs from 4 to 400 ms. In revision 1.18, this was changed back to a range of RTTs from 4 to 200 ms. The range of RTTs up to 400 ms seems the most realistic to me, for the default scenario, but I could live with a range up to 200 ms, for the first pass at the scenarios. Perhaps it was changed back because 200 ms is easier for testbeds than 400 ms? I don't remember, and it is impossible to tell from the logs who made which change. - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 18 Feb 2008 20:17:38 -0800 Subject: [Tmrg] Mix of RTTs In-Reply-To: References: Message-ID: Greetings Sally, On 18/02/2008, Sally Floyd wrote: > > If the traffic consists instead of Poisson arrivals of "sessions", > > each carrying a fixed amount of traffic (possibly in several > > think/send bursts), then the amount of data sent at each RTT is > > determined by the traffic model, independent of the actual RTTs. > > I don't understand this. Assume Poisson arrivals of sessions, each > carrying a fixed amount of traffic. The amount of data sent in > each RTT is determined by the end-to-end congestion control. Yes. My wording was confusing. When I said "data sent *at* each RTT", I meant "data eventually sent by flows having a particular RTT", not "data in a particular interval of duration one RTT". Each individual long-RTT flow will transmit slower, but as a result, there will be more of them in the system. The total data (eventually) sent by these flows equals the sum of the file sizes which arrive, regardless of how slowly they are sent. (Of course, this only applies exactly if the time scale of the simulation is long compared to one flow transfer time, but that is the way the real world is.) 
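The point above can be illustrated with Little's law under simplifying assumptions (an open-loop source per RTT class, fixed per-flow rates, a stable system; every number below is made up): each RTT class eventually carries lambda x mean size per second regardless of its RTT, and a longer RTT just means more flows in progress at any instant.

```python
# Toy illustration (assumed numbers): two RTT classes with identical
# open-loop session arrivals but different per-flow rates.
lam = 5.0           # sessions/sec arriving in each RTT class
mean_size = 1e6     # bytes per session
per_flow_rate = {"short_rtt": 5e6, "long_rtt": 1e6}  # bytes/sec (assumed)

for cls, r in per_flow_rate.items():
    transfer_time = mean_size / r            # sec to finish one session
    flows_in_progress = lam * transfer_time  # Little's law: N = lambda * T
    carried_load = lam * mean_size           # bytes/sec -- same for both classes
    print(cls, flows_in_progress, carried_load)
    # -> short_rtt 1.0 5000000.0
    # -> long_rtt 5.0 5000000.0
```

This ignores the coupling through congestion, so it is only an equilibrium sketch of the argument, not a simulation of the test-suite scenarios.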
> For > TCP, where in congestion avoidance a flow increases its sending > rate by one packet per RTT, short-RTT flows send at a much higher > sending rate *in packets per second* than do long-RTT flows, given > the same packet drop rates for the two flows. Agreed. The rate that an individual long-RTT flow sends will be lower, but this is balanced by the fact that it keeps sending for longer. > > At the round table, we agreed to have a traffic model of the second > > kind. Will that change the RTTs that we should use in the test suite? > > Figure 5 from the Internet Research Needs Better Models paper ... > I would assume that at the end of the 100 seconds, the long-RTT > flows had more unfilled demand that the short-RTT flows. Yes, the long flows will have more unfilled demand. However, if the simulation has been run long enough, the unfilled demand of the remaining flows will be a small fraction of the total data. This gets back to our discussion about whether it is meaningful to study time-average properties of systems which haven't yet reached equilibrium. The reason for taking measurements only over the second half is to avoid non-equilibrium effects, isn't it? If so, it would be consistent to wait until the system actually has reached equilibrium. Otherwise, time averages are misleading quantities. Do you agree? At a random point in the real world, the each long RTT flow will be about half-finished, just like each short RTT flow will be. If the long RTT flows have more unsent data in the real world, it is because there are more of them. > > As I recall, you wanted to revise that section before the final > > submission anyway. > > The Internet Research Needs Better Models paper used RTTs that > were uniformly distributed between 20 and 460 ms, in the > absence of queueing delay. Table 1 of the PFLDnet paper > now gives RTTs in the range of 4 to 200 ms, in the absence > of queueing delay. 
Your email below suggests that it was a little more complicated than uniform [20,460]. I interpreted it as saying that most of the traffic was [0,220], but I could have misunderstood. If you're happy with the current text, we'll just go with it. Cheers, Lachlan On 11/12/2007, Sally Floyd wrote: > > You're right that the delays don't match those in the paper very well. > > Our reference was the link you sent in November. As I commented on 3 > > December, there seems to be a discrepancy between the paper and the > > scripts on the web which purport to have produced those graphs. Since > > the paper didn't have values and we didn't hear your response to my > > query, we used the values in the scripts we were pointed to. > > > > Are you sure that the scripts on the web are the ones used? > > Yep. But it turns out that the topology in the scripts is more > complicated than I remembered. > > In the scripts, there are two sets of access links. > > The access links for the long-lived traffic are as follows: > $ns duplex-link $node_(s$i) $node_(r1) 100Mb [expr $delay2]ms > DropTail > $ns duplex-link $node_(k$i) $node_(r2) 100Mb [expr $delay2a]ms > DropTail > for > set delay2 [expr 2*$opt(secondDelay)*((($i+3)%10)/9.0)] > set delay2a [expr 2*$opt(secondDelay)*((($i+3)%10)/9.0)] > and secondDelay set to 55 ms. > > This gives one-way propagation delays for each of the access links > for the long-lived traffic of [0,110] ms, giving RTTs for the > long-lived traffic, > in the absence of queueing delay and the small delay for the central > link, > equally distributed in [0, 440] ms. > > The access delays for the web traffic are as follows: > $ns duplex-link $s_($i) $node_(r1) 2000Mb $x DropTail > $ns duplex-link $r_($i) $node_(r2) 2000Mb $y DropTail > for > set x [expr $bdel*((($i+3)%10)/9.0)]ms > set y [expr $bdel*((($i+3)%10)/9.0)]ms > and bdel set to 55 ms. 
> > This gives one-way propagation delays for each of the access links > for the web traffic of [0,55] ms, giving RTTs, in the absence of > queueing > delay etc., equally distributed in [0, 220] ms. > > I don't think that this difference between the RTTs for the long-lived > traffic and the web traffic was on purpose. > > Figure 5 was run with a range of web traffic and long-lived traffic, > but dominated by web traffic: > ./ns sims.tcl -flows 18 -web 400 -rtts 1 -title two > two.data > > I just reran the simulations, one with RTTs for the web traffic > equally distributed in [0, 220] ms., as used for Figure 5 in the paper, > and the other with RTTs for the web traffic equally distributed > in [0, 440] ms. This first one matched the experimental data > better, so I will change the one-way propagation delays for the > access links in the paper to give RTTs of [0, 220] ms. > > (I am assuming that everything in this first draft is subject to > change as we learn more from measurements and from running > the simulations and experiments....) > > > On 03/12/2007, Lachlan Andrew wrote: > >> Greetings Sally, > >> > >> On 26/11/2007, Sally Floyd wrote: > >>> Cesar wrote > >>>> 1) the RTTs of the access links for the dumbbell scenarios > >>>> In this topic, I read the paper by Sally and Kohler about > >>>> "Internet Research Needs Better Models". > >>>> "http://www.icir.org/models/hotnetsFinal.pdf" > >>> > >>> For the scenario in that paper, the flows are distributed evenly > >>> over all of the nine links pairs (that is, those pairs that have > >>> one link on the left, and one link on the right). The simulation > >>> scripts are available from "http://www.icir.org/models/sims.html". > >> > >> I've tried reading the scripts at > >> , and really can't > >> see what RTTs the links used. (I'm not very fluent at TCL.) 
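The delay expressions quoted from the scripts above can be checked directly. This sketch reproduces the discretized distribution in Python; pairing the left and right access links by the same node index is my assumption, but it is consistent with the [0, 440] ms and [0, 220] ms ranges stated in the email.

```python
# One-way access-link delay from the quoted Tcl expressions:
#   [expr 2*$opt(secondDelay)*((($i+3)%10)/9.0)]  (long-lived, scale=2)
#   [expr $bdel*((($i+3)%10)/9.0)]                (web traffic, scale=1)
SECOND_DELAY = 55.0  # ms; both opt(secondDelay) and bdel are 55 ms

def access_delay_ms(i, scale):
    return scale * SECOND_DELAY * (((i + 3) % 10) / 9.0)

# RTT (no queueing, central-link delay ignored) = 2 * (left access
# delay + right access delay), with both access links sharing index i.
rtts_long = sorted(2 * 2 * access_delay_ms(i, scale=2) for i in range(10))
rtts_web = sorted(2 * 2 * access_delay_ms(i, scale=1) for i in range(10))
print(rtts_long[0], rtts_long[-1])  # 0.0 440.0
print(rtts_web[0], rtts_web[-1])  # 0.0 220.0
```

Ten node pairs give ten evenly spaced RTT values, i.e. the "equally distributed" discretized uniform distribution described above.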
> >> > >> It seems to me that the number of web nodes created in > >> add_web_traffic is numWeb=10, and the RTTs seem to be drawn from > >> a discretized uniform distribution [generated by > >> $bdel*((($i+3)%10)/9.0] with a maximum value of > >> opt(secondDelay)=55ms. That doesn't mesh with the maximum RTT of > >> 460ms in the paper. > >> > >> As I recall, at the meeting you offered to find the RTTs which would > >> match a measured distribution. As a short-cut for the PFLDnet > >> abstract, could you please let us know what link delays were used in > >> the "better models" paper? > >> > >> Thanks, > >> Lachlan > > - Sally > http://www.icir.org/floyd/ > > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 21 Feb 2008 16:42:10 -0800 Subject: [Tmrg] Mix of RTTs In-Reply-To: References: Message-ID: Lachlan - On Feb 18, 2008, at 8:17 PM, Lachlan Andrew wrote: > Greetings Sally, > > On 18/02/2008, Sally Floyd wrote: >>> If the traffic consists instead of Poisson arrivals of "sessions", >>> each carrying a fixed amount of traffic (possibly in several >>> think/send bursts), then the amount of data sent at each RTT is >>> determined by the traffic model, independent of the actual RTTs. >> >> I don't understand this. Assume Poisson arrivals of sessions, each >> carrying a fixed amount of traffic. The amount of data sent in >> each RTT is determined by the end-to-end congestion control. > > Yes. My wording was confusing. When I said "data sent *at* each > RTT", I meant "data eventually sent by flows having a particular RTT", > not "data in a particular interval of duration one RTT". > > Each individual long-RTT flow will transmit slower, but as a result, > there will be more of them in the system. 
The total data (eventually) > sent by these flows equals the sum of the file sizes which arrive, > regardless of how slowly they are sent. (Of course, this only applies > exactly if the time scale of the simulation is long compared to one > flow transfer time, but that is the way the real world is.) Actually, the *real world* contains users whose behavior is a function of congestion and download times experienced so far. And in the real world (with current TCP), users over connections with longer RTTs have much slower download times than users over connections with shorter RTTs. And therefore will download less. But since our simulations and experiments don't yet have user behavior sensitive to past congestion and to past download times, this doesn't happen in our simulations and experiments... >> For >> TCP, where in congestion avoidance a flow increases its sending >> rate by one packet per RTT, short-RTT flows send at a much higher >> sending rate *in packets per second* than do long-RTT flows, given >> the same packet drop rates for the two flows. > > Agreed. The rate that an individual long-RTT flow sends will be > lower, but this is balanced by the fact that it keeps sending for > longer. > >>> At the round table, we agreed to have a traffic model of the second >>> kind. Will that change the RTTs that we should use in the test >>> suite? >> >> Figure 5 from the Internet Research Needs Better Models paper > ... >> I would assume that at the end of the 100 seconds, the long-RTT >> flows had more unfilled demand than the short-RTT flows. > > Yes, the long flows will have more unfilled demand. However, if the > simulation has been run long enough, the unfilled demand of the > remaining flows will be a small fraction of the total data. Yep, if the average load is less than 100%. If the average load is greater than 100%, then the unfilled demand increases and increases, the longer we run the simulation, with a lot of the unfilled demand from the longer-RTT flows. 
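Sally's point about overload can be seen in a toy fluid model: with offered load below 100%, unfinished work stays bounded; above 100%, it grows for as long as the run lasts. Everything here (rates, durations, the fluid abstraction) is an illustrative assumption, not a value from the test suite.

```python
import random

def backlog(load, duration, seed=1):
    """Toy fluid model: flows arrive as a Poisson process of rate 1/s,
    each bringing `load` seconds of work, and the link serves one
    second of work per second. Returns the work left unserved at the
    end of the run. (All parameters are illustrative assumptions.)"""
    rng = random.Random(seed)
    t, work = 0.0, 0.0
    while True:
        dt = rng.expovariate(1.0)
        if t + dt >= duration:
            return max(0.0, work - (duration - t))
        t += dt
        work = max(0.0, work - dt) + load

# Below 100% load the unfinished work stays small; above 100% it keeps
# growing, mirroring "the unfilled demand increases and increases, the
# longer we run the simulation".
low = backlog(load=0.8, duration=10000)
high = backlog(load=1.2, duration=10000)
print(low < high)  # True
```

In the overloaded case the backlog grows roughly linearly with the run length (about 0.2 seconds of work per second here), so there is no equilibrium to wait for.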
> This gets back to our discussion about whether it is meaningful to > study time-average properties of systems which haven't yet reached > equilibrium. The reason for taking measurements only over the second > half is to avoid non-equilibrium effects, isn't it? If so, it would > be consistent to wait until the system actually has reached > equilibrium. Otherwise, time averages are misleading quantities. Do > you agree? For me, the reason to take measurements over the second half of the experiment is to avoid the odd and atypical period in the beginning of the simulation when all flows are slow-starting at the same time. But personally, I am perfectly happy to run simulations for finite, specified time periods when the average load is greater than 100%, and there is no equilibrium. (In fact, I think it is probably quite necessary, if one wants scenarios with higher levels of congestion over the lifetime of the simulation.) > At a random point in the real world, each long RTT flow will be > about half-finished, just like each short RTT flow will be. If the > long RTT flows have more unsent data in the real world, it is because > there are more of them. > >>> As I recall, you wanted to revise that section before the final >>> submission anyway. >> >> The Internet Research Needs Better Models paper used RTTs that >> were uniformly distributed between 20 and 460 ms, in the >> absence of queueing delay. Table 1 of the PFLDnet paper >> now gives RTTs in the range of 4 to 200 ms, in the absence >> of queueing delay. > > Your email below suggests that it was a little more complicated than > uniform [20,460]. I interpreted it as saying that most of the traffic > was [0,220], but I could have misunderstood. If you're happy with the > current text, we'll just go with it. Ah dear, I had forgotten about that email. Yep, I am happy with the current text. Take care, - Sally http://www.icir.org/floyd/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080221/3491d8c2/attachment.html From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Thu, 21 Feb 2008 17:17:46 -0800 Subject: [Tmrg] Mix of RTTs In-Reply-To: References: Message-ID: Greetings Sally, Thanks for your reply. On 21/02/2008, Sally Floyd wrote: > > Actually, the *real world* contains users whose behavior is a function > of congestion and download times experienced so far. And in the real > world (with current TCP), users over connections with longer RTTs > have much slower download times than users over connections with > shorter RTTs. And therefore will download less. > > But since our simulations and experiments don't yet have user > behavior sensitive to past congestion and to past download times, > this doesn't happen in our simulations and experiments... True. However, we can easily model "users with long RTTs choose to download less" in a way which doesn't need their behaviour to reflect actual experience. We can just choose the load at each RTT. > > Yes, the long flows will have more unfilled demand. However, if the > > simulation has been run long enough, the unfilled demand of the > > remaining flows will be a small fraction of the total data. > > Yep, if the average load is less than 100%. If the average load is > greater than 100%, then the unfilled demand increases and > increases, the longer we run the simulation, with a lot > of the unfilled demand from the longer-RTT flows. True. In the "better models" paper, were the RTT comparison tests run at over 100% load? I would have thought that comparing the RTT distribution at a load which lets all the traffic through would be the natural setting. > For me, the reason to take measurements over the second half > of the experiment is to avoid the odd and atypical period in the > beginning of the simulation when all flows are slow-starting at > the same time. Yes, that is certainly the biggest artefact to avoid. 
> But personally, I am perfectly happy to run > simulations for finite, specified time periods when the average > load is greater than 100%, and there is no equilibrium. > (In fact, I think it is probably quite necessary, if one wants scenarios > with higher levels of congestion over the lifetime of the simulation.) OK. I'll get back to you on this when/if I try some simulations which start in equilibrium... > Yep, I am happy with the current text. Great. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/~lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Thu, 21 Feb 2008 18:05:43 -0800 Subject: [Tmrg] Mix of RTTs In-Reply-To: References: Message-ID: Lachlan - >>> Yes, the long flows will have more unfilled demand. However, if the >>> simulation has been run long enough, the unfilled demand of the >>> remaining flows will be a small fraction of the total data. >> >> Yep, if the average load is less than 100%. If the average load is >> greater than 100%, then the unfilled demand increases and >> increases, the longer we run the simulation, with a lot >> of the unfilled demand from the longer-RTT flows. > > True. In the "better models" paper, were the RTT comparison tests run > at over 100% load? I would have thought that comparing the RTT > distribution at a load which lets all the traffic through would be the > natural setting. For the "better models" paper, I didn't calculate the load. (The simulation scripts are on-line at "http://www.icir.org/models/sims.html", but there is a mix of long-lived traffic and traffic from the traffic generator. And the traffic from the traffic generator is specified by specifying the session arrival rate, the average connection size in packets, etc.) 
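The load Sally describes (specified indirectly via the session arrival rate and average connection size) reduces to a one-line calculation against link capacity. The function and the example numbers below are purely illustrative; the actual values used in the scripts are not given in this thread.

```python
def offered_load(sessions_per_s, mean_conn_pkts, pkt_bytes, link_bps):
    """Offered load = average arriving bit-rate / link capacity.
    (Parameter names and example numbers are illustrative assumptions,
    not values from the "better models" scripts.)"""
    return sessions_per_s * mean_conn_pkts * pkt_bytes * 8 / link_bps

# e.g. 50 sessions/s of 20 x 1500-byte packets on a 10 Mb/s link:
print(offered_load(50, 20, 1500, 10e6))  # 1.2 -> overloaded
```

Anything above 1.0 means the scenario has no equilibrium, which is the distinction the thread keeps returning to.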
You are of course right that the distribution of RTTs shown in Figure 5 of the "better models" paper would be a function of the level of load... - Sally http://www.icir.org/floyd/ From: ldunn at cisco.com (Lawrence D. Dunn) Date: Mon, 10 Mar 2008 12:44:01 -0600 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> Message-ID: Lachlan, Here are my "raw notes" taken during your talk... Larry -- > 4:01p Lachlan A. > Towards a Common TCP Evaluation Suite Lachlan Andrew1, Cesar Marcondes2, Sally Floyd3, Lawrence Dunn4, Romaric Guillier5, Wang Gang6, Lars Eggert7, Sangtae Ha8, Injong Rhee8, 1 Caltech, 2 UCLA, 3 ICSI, 4 Cisco Systems, 5 INRIA, 6 NEC China, 7 Nokia, 8 NCSU > He mentioned this was in part, in response to Dunn's "Injong and Doug agree on results" goal... > and reduce arguments, need to repeat experiments which may/may not match someone else's > want them to be as realistic as possible; not a "benchmark", as sally points out people will tune to meet the benchmark > balance reductionism (doug's 1-variable at a time) with realism > design space > start w/ simulator, or testbed > choose topology (delay, bitrate) > flows under test > cross traffic > metrics to track > comments > richard- convergence is the wrong name; please be precise in terminology- like suppressing slowstart is not the same as "a new flow starts..." > is 80% for transients misleading? maybe a spread of points, to help characterize shape a bit? > richard- on impact on reno test- endstations are uncontrollable, and he's afraid things will sync or end up in lockstep due to endstation effects; lachlan- we watch experimental loading; light host links/cpu, tighter bottleneck > At 10:21 AM -0700 3/10/08, Lachlan Andrew wrote: >Greetings all, > >The TCP test suite presentation at PFLDnet received a lot of useful >feedback. 
I took some notes afterwards, but missed many of the questions. >Could those who raised issues during the talk please repeat them to the list? >(The slides are at <...>.) > >The following are the comments I remember, and my responses. > >- For the RTT distribution, why limit it to the nine discrete values, > when a modified dummynet can give per-flow delays? > > I said that it was to make it more platform-independent. However, > the point remains: Why 6 nodes, not 4 or 8? We'll have to revisit > this when we look at whether these link delays give the right RTT > distribution when long-RTT flows complete. > >- What is "real"? How do we know that the "realistic" cases we study > are better than simpler ones? > > We should probably quote measurement studies to back things up. > I would add that we must justify that the benefits of capturing any > given real effect are worth the cost, for example in terms of making > results potentially misleading due to statistical effects, or > difficult to interpret for other reasons. > >- It is not "realistic" to assume that the newly arriving flow misses > slow start. > > I still think it is a useful case to consider, but we should justify it. > If we say it is the "worst case", then we'd have to justify not applying > the same logic to losses due to cross traffic. The worst case is if > there are none during the transient, which I think would happen more > often than a flow exiting slow start on the first RTT. > > If we have time, it would be interesting to see the sensitivity of > convergence time to cross traffic; instead of one test with 10%, > we could have one with 0% and one with 30%. > >- The "convergence time" section needs a new name > >- When we have multiple flows being generated by a single host, we need > to avoid them getting into lock-step because of host issues. > > I think that other processing issues will keep us to fairly low rates > (<= 622M?) 
in testbeds, at least Linux ones, and so this shouldn't > be a problem. > >- For the 3-hop topology in which each flow gets 60ms delay, we currently > specify having seven delay elements. It is possible to achieve this > with only two delay elements (one of the "access" links, and one of > the bottleneck links). > > Are these equivalent? They both result in the same RTTs; most TCP > throughput models only care what the RTT is, not which link it occurs on. > We can't do the same with shifting buffers around, but I think we can > shift propagation delays around freely, can't we? > > If they're not equivalent, could we standardize on the one with > two delays instead of seven? It would mean we only need 4 dummynets > instead of 7. > >Also, Lars suggested that all authors of the PFLDnet paper become >acknowledgements, and that we build up a new set of authors based on >who keeps the suite moving, independent of who was at the round table. > >Cheers, >Lachlan > > > >-- >Lachlan Andrew Dept of Computer Science, Caltech >1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >http://netlab.caltech.edu/lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 17 Mar 2008 14:43:11 -0700 Subject: [Tmrg] [Iccrg] TCP evaluation suite round-table In-Reply-To: <47DE986B.9050501@cis.udel.edu> References: <47DE986B.9050501@cis.udel.edu> Message-ID: Greetings Preethi, On 17/03/2008, Preethi Natarajan wrote: > > Are there any recommendations for cross-traffic generation/Tmix > "connection vector" parameters used in "Towards a Common TCP Evaluation > Suite"?. > Specifically, any pointers to details such as the number of > FTP/HTTP/voice sessions to use, "think" time for HTTP sessions etc. Not yet. The plan is to produce standard connection vectors, probably some snippets from the actual traces that the Tmix team have been using. We'll keep you informed of any developments. 
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 18 Mar 2008 09:48:36 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> Message-ID: <19B51173-135B-4D82-BF98-086118F2599E@mac.com> Lachlan - > > - For the 3-hop topology in which each flow gets 60ms delay, we > currently > specify having seven delay elements. It is possible to achieve this > with only two delay elements (one of the "access" links, and one of > the bottleneck links). > > Are these equivalent? They both result in the same RTTs; most TCP > throughput models only care what the RTT is, not which link it > occurs on. > We can't do the same with shifting buffers around, but I think we can > shift propagation delays around freely, can't we? > > If they're not equivalent, could we standardize on the one with > two delays instead of seven? It would mean we only need 4 dummynets > instead of 7. I think it makes more sense to specify the topology as it is specified now in Section G ("a "parking-lot" topology with three (horizontal) bottleneck links and four (vertical) access links"), and then to add that in a testbed, this can be *implemented* with only N delay elements. Because we want the scenarios to be useful for simulators as well as testbeds, and some simulators don't use delay elements. And it makes clearer the real-world topology that corresponds to our simulation or experiment. 
- Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 18 Mar 2008 10:12:33 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: <19B51173-135B-4D82-BF98-086118F2599E@mac.com> References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> Message-ID: Greetings Sally, On 18/03/2008, Sally Floyd wrote: > > I think it makes more sense to specify the topology as it is specified > now in Section G ("a "parking-lot" topology with three (horizontal) > bottleneck > links and four (vertical) access links"), and then to add that in a > testbed, this can be *implemented* with only N delay elements. I wasn't proposing a change to the arrangements of links, just the allocation of delays on those links. The description above describes both delay allocations equally well. The only distinction is whether all links have equal delay, or two have large delays and the rest have negligible delays. If the two delay arrangements are equivalent (as I believe they are) then I agree it is good to describe the symmetric one and note that the asymmetric one is equivalent. It would only become awkward if there is a useful metric for which they give different results. > Because we want > the scenarios to be useful for simulators as well as testbeds, and some > simulators don't use delay elements. And it makes it more clear the > real-world topology that corresponds to our simulation or experiment. I don't quite understand either of these points. - If a simulator has no delay elements then I don't understand how it can implement either topology, because they both have delays. In ns, a delay element is just a link with non-negligible delay. - It isn't clear that a completely symmetric case is more real-world than a highly asymmetric case; both sound like "useful oversimplifications" to me. 
(I'm not meaning to be argumentative; we agree that it is best to present the symmetric case as the nominal network.) Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 18 Mar 2008 14:33:18 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> Message-ID: <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> On Mar 18, 2008, at 10:12 AM, Lachlan Andrew wrote: > Greetings Sally, > > On 18/03/2008, Sally Floyd wrote: >> >> I think it makes more sense to specify the topology as it is >> specified >> now in Section G ("a "parking-lot" topology with three (horizontal) >> bottleneck >> links and four (vertical) access links"), and then to add that in a >> testbed, this can be *implemented* with only N delay elements. > > I wasn't proposing a change to the arrangements of links, just the > allocation of delays on those links. The description above describes > both delay allocations equally well. The only distinction is whether > all links have equal delay, or two have large delays and the rest have > negligible delays. > > If the two delay arrangements are equivalent (as I believe they are) > then I agree it is good to describe the symmetric one and note that > the asymmetric one is equivalent. It would only become awkward if > there is a useful metric for which they give different results. My assumption was this topology: A ------ B ------ C ------ D with four access links, A-E, B-F, C-G, D-H. (I didn't draw the access links, because I don't trust mail readers to all present it the same way...) The flows with multiple bottlenecks go from A to D, and vice versa. 
The single bottleneck flows go between E and F, F and G, and G and H. So there are three congested links, and four separate paths. All paths have the same 60 ms round-trip time (in the absence of queueing delay.) So I am assuming that you want the links B-F and C-D to each have a 30 ms. one-way delay, and for the other links to have 0 ms. delay. Or something like that. But for all links to have the same queue sizes and such. I would be happy for the paper to describe the symmetric case, and to note that the asymmetric case is roughly equivalent. (I wouldn't expect it to be *exactly* equivalent, because the timing of packets arriving at forward-path and reverse-path queues should be slightly different.) - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 24 Mar 2008 07:45:26 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite - traffic generator question In-Reply-To: <02cf01c88d7d$d82c4ef0$c44c1cac@ad.research.nec.com.cn> References: <02cf01c88d7d$d82c4ef0$c44c1cac@ad.research.nec.com.cn> Message-ID: Greetings, I've contacted the Tmix team asking for their measured traces, and hope to generate suitable short traces from them. If they don't reply soon, I'll try to generate some synthetic traffic (probably Poisson arrivals and Pareto file sizes). However, if we need to resort to that, I'd suggest that we go back to using Harpoon which actually exists for Linux. The minor increase in realism it allows will be lost if we have to use synthetic data anyway. The Full-TCP issue also applies to the main set of "general tests". On one hand, I think that the authors of a modification to TCP should be willing to write it for the Full-TCP module. On the other hand, this is another reason to go back to Harpoon, especially if people want to compare their enhancements to the existing enhancements. 
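The parking-lot equivalence claim discussed earlier in this thread, that putting 30 ms on the B-F access link and on the C-D bottleneck gives every path the same 60 ms RTT as 10 ms on every link, is easy to verify numerically. This sketch uses Sally's A-H labelling; it assumes propagation delay simply sums along each path, ignoring queueing (which is exactly why the two allocations are only "roughly" equivalent in practice).

```python
# Parking-lot topology, Sally's labels: routers A-B-C-D in a line,
# access nodes E, F, G, H hanging off A, B, C, D respectively.
paths = {
    "A-D (multi-bottleneck)": ["AB", "BC", "CD"],
    "E-F": ["AE", "AB", "BF"],
    "F-G": ["BF", "BC", "CG"],
    "G-H": ["CG", "CD", "DH"],
}

def rtts_ms(delay_ms):
    # RTT per path = 2 * sum of one-way link delays (no queueing).
    return {name: 2 * sum(delay_ms[link] for link in links)
            for name, links in paths.items()}

links = ["AB", "BC", "CD", "AE", "BF", "CG", "DH"]
symmetric = {link: 10 for link in links}   # every link 10 ms one-way
asymmetric = {link: 0 for link in links}
asymmetric["BF"] = 30  # the second access link
asymmetric["CD"] = 30  # the bottleneck it does not touch

print(rtts_ms(symmetric))
print(rtts_ms(asymmetric))  # every path is 60 ms in both cases
```

Both allocations give 60 ms on all four paths, so as far as propagation delay goes the seven-delay and two-delay configurations are interchangeable.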
Cheers, Lachlan On 24/03/2008, Wang gang wrote: > Dear all, > > I have a question about using Tmix in the traffic generation in > ns2 simulation. > > If we use Tmix as the traffic generator in the three node pairs > of the Dumb-Bell topology, we need to have the connection > vectors for the described load and packet size distributions > listed in the paper, is that right? Or who will provide them? > > And since Tmix is using Full-TCP, if we want to do > IV E. Impact on standard TCP traffic and > F. Intra-protocol fairness in the paper, we need the TCP variants > implemented using Full-TCP (They are not now). Is this > a problem? Correct me if I'm wrong. -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: lipeng967 at 163.com (lipeng967) Date: Tue, 25 Mar 2008 12:46:31 +0800 (CST) Subject: [Tmrg] request to join in the maillist Message-ID: <11167518.289281206420391567.JavaMail.coremail@bj163app93.163.com> Dear Professor, I am glad to join the mailing list. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080325/7d829849/attachment.html From: sallyfloyd at mac.com (Sally Floyd) Date: Tue, 25 Mar 2008 11:00:48 -0700 Subject: [Tmrg] RFC 5166: Metrics for the Evaluation of Congestion Control Mechanisms Message-ID: <1408FBE7-0544-4EFE-92EA-E269127AD4F9@mac.com> Just a note that "Metrics for the Evaluation of Congestion Control Mechanisms" has, after a slow process, appeared as an Informational RFC. "http://www.ietf.org/rfc/rfc5166.txt" The next steps for TMRG are to finish the internet-draft on "Tools for the Evaluation of Simulation and Testbed Scenarios", and to see the completion of the TCP Evaluation Suite begun in the PFLDnet 2008 paper on "Towards a Common TCP Evaluation Suite". 
- Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 1 Apr 2008 19:35:09 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> Message-ID: Greetings all, Let's get back onto the test suite... I propose the following text for the multi-hop test: The topology is a ``parking-lot'' topology with three (horizontal) bottleneck links and four (vertical) access links. The bottleneck links have a rate of 100\,Mbps, and the access links have a rate of 1\,Gbps. All flows have a round-trip time of 60\,ms. This can be achieved by all links having a one-way delay of 10\,ms. Alternatively, it may be achieved by (a) the second access link having a one-way delay of 30\,ms (b) the bottleneck link to which it does not connect having a one-way delay of 30\,ms and (c) all other links having negligible delay. (The latter configuration can be extended to more than three bottlenecks, by assigning a delay of 30\,ms to every alternate access link, and to zero or one of the bottleneck links.) Other points: - For the "satellite" link, why does the central (satellite) link have a symmetric bit rate, while the ground links are asymmetric? I'd suggest making the central link 40M/4M, and the ground links all symmetric, either at 40M or preferably at 100M or 1G. - The "dial-up" case uses 64kbit/s. Should we perhaps make that 56kbit/s in one direction and 48kbit/s in the other, which is the best available from a V.92 modem? - What format should we use for the Internet draft -- xml? nroff? I'm hoping to patch tmix to let it re-use the same input file for multiple different traffic loads, but that will take a bit of time... 
Cheers, Lachlan On 18/03/2008, Sally Floyd wrote: > > On Mar 18, 2008, at 10:12 AM, Lachlan Andrew wrote: > > > Greetings Sally, > > > > On 18/03/2008, Sally Floyd wrote: > >> > >> I think it makes more sense to specify the topology as it is > >> specified > >> now in Section G ("a "parking-lot" topology with three (horizontal) > >> bottleneck > >> links and four (vertical) access links"), and then to add that in a > >> testbed, this can be *implemented* with only N delay elements. > > > > I wasn't proposing a change to the arrangements of links, just the > > allocation of delays on those links. The description above describes > > both delay allocations equally well. The only distinction is whether > > all links have equal delay, or two have large delays and the rest have > > negligible delays. > > > > If the two delay arrangements are equivalent (as I believe they are) > > then I agree it is good to describe the symmetric one and note that > > the asymmetric one is equivalent. It would only become awkward if > > there is a useful metric for which they give different results. > > > My assumption was this topology: > > A ------ B ------ C ------ D > > with four access links, A-E, B-F, C-G, D-H. > (I didn't draw the access links, because I don't trust mail > readers to all present it the same way...) > > The flows with multiple bottlenecks go from A to D, and vice versa. > The single bottleneck flows go between E and F, F and G, and G and > H. So there are three congested links, and four separate paths. > All paths have the same 60 ms round-trip time (in the absence of > queueing delay.) > > So I am assuming that you want the links B-F and C-D to each have a > 30 ms. one-way delay, and for the other links to have 0 ms. delay. > Or something like that. But for all links to have the same > queue sizes and such. > > I would be happy for the paper to describe the symmetric case, > and to note that the asymmetric case is roughly equivalent. 
> (I wouldn't expect it to be *exactly* equivalent, because the > timing of packets arriving at forward-path and reverse-path > queues should be slightly different.) > > > - Sally > http://www.icir.org/floyd/ > > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: ldunn at cisco.com (Lawrence D. Dunn) Date: Wed, 2 Apr 2008 13:07:38 -0500 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> Message-ID: Lachlan, At the Stanford Trainwreck workshop yesterday (missed you!), http://yuba.stanford.edu/trainwreck/agenda.html there was someone from NIST who indicated they were working on a for-the-good-of-the-community TCP test suite, or similar. There was brief discussion on whether NIST would come up with some sort of "compliance testing" (generally viewed as "bad" by the crowd), or some sort of standard comparison tests. Which sounds a lot like what we're doing, which I mentioned to the group... Anyway, would you be game to ping the NIST person, and see how much commonality/overlap there is, whether there'd be synergy in some collaboration, etc? The only NIST person on the attendee list is: Vladimir Marbukh, NIST, malones at nist.gov (note the email addr doesn't seem to match the person-name; maybe it's OK, or maybe it's an admin, not sure...) Thoughts from others on this list? I hadn't heard about the NIST work before, and one might argue that too close an interaction w/ NIST might compromise the not-nation-specific nature of our work, etc. Still, to reduce duplication and try for a more broadly applicable outcome, might be worth seeing what they're up to... 
Larry -- At 7:35 PM -0700 4/1/08, Lachlan Andrew wrote: [..] From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 2 Apr 2008 13:23:31 -0800 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> Message-ID: Thanks Larry. I would have liked to go, but didn't get my act together... Thanks for the pointer. It would certainly be good to cooperate. Cheers, Lachlan On 02/04/2008, Lawrence D. Dunn wrote: [..] -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: garmitage at swin.edu.au (grenville armitage) Date: Thu, 03 Apr 2008 09:03:03 +1100 Subject: [Tmrg] Towards a Common TCP Evaluation Suite In-Reply-To: References: <5D977804-59A9-4A06-8E71-4C88AF56FA50@csnet.cs.odu.edu> <56D7B140-63DE-4560-91E6-46C70F7A4583@mac.com> <19B51173-135B-4D82-BF98-086118F2599E@mac.com> <15CDB731-42C9-430A-832A-B21A1824AE40@mac.com> Message-ID: <47F40297.3030503@swin.edu.au> Hi Lachlan, Lachlan Andrew wrote: [..] > - The "dial-up" case uses 64kbit/s.
Should we perhaps make that > 56kbit/s in one direction and 48kbit/s in the other, which is the > best available from a V.92 modem? I'd prefer to see ~52kbit/sec down and 33kbit/sec up for the "dial-up" case, to model the (IMHO more likely) V.90 case rather than V.92. (Also I note a comment on http://en.wikipedia.org/wiki/V.92#V.92 that V.92's upstream 48kbit/sec only occurs if you're willing to take a hit on the downstream rate... although, I'm taking the easy way out here by relying on a wikipedia article. Perhaps someone else can clarify the relevance / market penetration of V.92 today?) cheers, gja From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 6 Apr 2008 10:48:17 -0700 Subject: [Tmrg] TCP Evaluation Suite: Traffic Generator In-Reply-To: References: <200804040901.05989.ralfluebben@gmx.de> Message-ID: Greetings Ralf, On 04/04/2008, Ralf Lübben wrote: > > I really like the idea of the TCP Evaluation Suite because of the standardized > testing and the predefined scenarios. > > I read in the paper from PFLDnet 2008 that you are still looking for researchers > to work on the traffic generator. > > As I understand it, at the moment you will use Tmix, which replays application > workloads based on packet traces, independently of the particular applications. > > What is still needed? > > A distribution for this application-independent workload? > > or > > A detailed analysis of the influence of user behaviour (e.g., think times > etc.)? These are traffic measurement issues, rather than traffic generator issues. We need both of those, and also models for the influence *on* user behaviour -- that is, how congestion affects user behaviour rather than vice versa. Do users give up and leave, or do they abort connections and retry, or do they just sit patiently and leave their load unchanged? Do think times increase, because users go away and do something else, or decrease because the user has already read the text of a page by the time the images have finished loading?
Related questions are: - How do traffic parameters change during congestion? Normally the inter-flow times will be reduced, but are the file sizes higher or lower, and more or less heavy tailed? - How does mean file size vary with link capacity? For example, people wanting to transfer very large files won't do so on a very low-speed link. As far as traffic generators themselves go, we'll probably stick with Tmix and enhance it to meet our needs. For example, we want to be able to specify the MSS of a flow, which I don't think it can currently do. We also want one sender to be able to send to multiple receivers. Do you mind if I forward this mail to the TMRG mailing list? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 9 Apr 2008 20:03:21 -0700 Subject: [Tmrg] Traffic trace options Message-ID: Greetings all, A few people have asked for the traffic traces for the test suite. How does this suggestion sound: UNC have provided us with two traces. Both are bidirectional, specify the RTT of each flow, and are about one hour. One is for sessions started by a host inside their campus and one for sessions started outside. For simplicity, I propose using only the larger of these two. I've been told that Tmix can scale traffic loads, so that should do for all load cases. We need 9 separate traces for the 9 source/destination pairs, plus another 9 in the reverse direction. I propose subsampling the trace to do that. (That should maintain the correct correlation structure, while overlaying 9 time-delayed copies of the same trace would reduce burstiness.) To do the subsampling, I propose sorting the sessions by RTT, and allocating the top 11% to the S/D pair with the longest RTT etc. 
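The RTT-sorted subsampling can be sketched as follows; the session record format here is an assumption for illustration, not Tmix's actual connection-vector format:

```python
# Sort sessions by RTT and deal them out in contiguous ~11% slices, the
# longest-RTT slice going to the source/destination pair with the longest path.
def split_by_rtt(sessions, n_pairs=9):
    ordered = sorted(sessions, key=lambda s: s["rtt"], reverse=True)
    size = len(ordered) / n_pairs
    return [ordered[round(i * size):round((i + 1) * size)]
            for i in range(n_pairs)]

# Toy trace: 90 sessions with RTTs 0..89 ms (invented numbers).
sessions = [{"rtt": r} for r in range(90)]
slices = split_by_rtt(sessions)
assert sum(len(s) for s in slices) == len(sessions)  # every session is kept
assert slices[0][0]["rtt"] == 89                     # longest RTTs go first
```

Because each slice is contiguous in the RTT ordering, the correlation structure within a slice is preserved, which is the point of subsampling rather than overlaying time-shifted copies.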
I also propose using a cyclic permutation of the trace, so that the first 100s or so has the same load as the average over the entire trace, to try to minimise the bias introduced by running very short experiments. Feedback on these proposals would be most welcome (or ask me to explain it more clearly). If no-one has any objections or suggestions, Tom Q will post the 9 resulting traces on the web sometime next week. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Wed, 16 Apr 2008 15:05:17 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite - traffic generator question In-Reply-To: References: <02cf01c88d7d$d82c4ef0$c44c1cac@ad.research.nec.com.cn> Message-ID: Lachlan - (Sorry, this is Sally finally getting to old March 24 email...) > The Full-TCP issue also applies to the main set of "general tests". > On the one hand, I think that the authors of a modification to TCP should > be willing to write it for the Full-TCP module. On the other hand, > this is another reason to go back to Harpoon, especially if people > want to compare their enhancements to the existing enhancements. For the tests using the ns-2 simulator, I think that tests that were limited to use with the Full-TCP module wouldn't be very useful to the general research community. Just FYI. (1) Full-TCP has never been validated as fully as the one-way TCP in ns-2. The following very old note on the ns-2 web page at "http://www.isi.edu/nsnam/ns/ns-limitations.html" is largely still valid: "Limitations to FullTCP: There is not a complete validation test suite for FullTCP." If anyone ever wanted to extend the validation tests in ns-2 for FullTCP, that would be great. (2) I would note that I am not funded as a support person for ns-2 - I work on ns-2 because I use it in my own research.
I add validation tests to ns-2 for one-way TCP because I use one-way TCP in my own research. I add functionality to one-way TCP generally because that is what I have needed for my own research. There is a very great deal of functionality, from me and from many other researchers, that is available on one-way TCP but not on Full-TCP in ns-2. There is very limited functionality available on Full-TCP. - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 16 Apr 2008 16:20:22 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite - traffic generator question In-Reply-To: References: <02cf01c88d7d$d82c4ef0$c44c1cac@ad.research.nec.com.cn> Message-ID: Thanks Sally. Limited testing of Full-TCP is a good point. Michele has said that they are already working on extending Tmix, so the issue should soon be solved. Cheers, Lachlan On 16/04/2008, Sally Floyd wrote: [..] -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: sallyfloyd at mac.com (Sally Floyd) Date: Wed, 16 Apr 2008 18:16:18 -0700 Subject: [Tmrg] Towards a Common TCP Evaluation Suite - traffic generator question In-Reply-To: References: <02cf01c88d7d$d82c4ef0$c44c1cac@ad.research.nec.com.cn> Message-ID: <7AA549C7-E8B6-440D-A1DB-CC70EBCD499A@mac.com> Lachlan - > Thanks Sally. Limited testing of Full-TCP is a good point. > > Michele has said that they are already working on extending Tmix, so > the issue should soon be solved. Great, thanks! - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 18 Apr 2008 19:39:32 -0700 Subject: [Tmrg] Test suite: Transfer time vs file size Message-ID: Greetings all, For the general scenarios, we specify one metric as the "transfer time per flow versus file size". How should we define that, given that the flows are mostly non-greedy? Options include: 1. Treat each "application data unit" as a single flow. That would mean that Tmix would have to record the statistics for us. 2. Treat each entire connection as a single flow. That would seem to give meaningless values. 3. Add some artificial greedy flows and only record statistics from them.
My favourite option is 1. What do others think? BTW, I know that lots of people read this list, but very few seem to post. It would be great to have more interaction. In particular, this test suite will only be successful if it represents a community consensus, and hence if people are willing to use it. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: hamed at ee.ucl.ac.uk (Hamed Haddadi) Date: Sat, 19 Apr 2008 13:34:36 +0930 Subject: [Tmrg] Test suite: Transfer time vs file size In-Reply-To: References: Message-ID: <9d1bec780804182104o5c52576alc321282d195dd7ef@mail.gmail.com> Greetings all, I am not on TMRG but I follow these emails as they are really interesting. I thought I would just mention that one tricky issue is the actual definition of a flow in this context: is a two-day eMule or BitTorrent session one flow with constant up/down times, or numerous bursty individual flows, as opposed to a radio stream that is active for days at a constant bit rate? And is a Gmail/MSN chat flow that is active for many days without much activity just considered one long flow without much data on it? The Internet is moving away from the traditional heavy-tailed/self-similar nature, and classic flow definitions and sampling methods may not be representative in future. cheers hamed 2008/4/19 Lachlan Andrew : [..] -- ================================================= hamed at ee.ucl.ac.uk , http://www.ee.ucl.ac.uk/~hamed Currently at School of Mathematical Sciences, University of Adelaide From: nfonseca at ic.unicamp.br (nfonseca at ic.unicamp.br) Date: Sat, 19 Apr 2008 16:40:48 -0300 (BRT) Subject: [Tmrg] Tmrg-interest Digest, Vol 19, Issue 6 In-Reply-To: References: Message-ID: <3833.143.106.7.122.1208634048.squirrel@webmail.ic.unicamp.br> Dear Lachlan, could you please specify what you mean by "application data unit"? I understand that transfer time per flow is the time elapsed between the arrival of the first bit of a flow at the destination and the arrival of its last bit at the destination (during its lifetime). Why do you consider option 2 meaningless?
Thanks for the clarification and sorry if I am out of phase (you asked for interaction) nelson fonseca From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 20 Apr 2008 22:09:50 -0700 Subject: [Tmrg] Test suite: Transfer time vs file size Message-ID: Greetings, On 19/04/2008, nfonseca at ic.unicamp.br wrote: > > could you please specify what you mean by "application data unit"? I meant a burst of traffic whose rate is governed by TCP -- I think that is what the Tmix team mean by the term. A TCP connection consists essentially of a sequence of ADUs separated by idle times when the application has no data to send.
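A toy numerical sketch (the ADU timings and sizes below are invented, not from any trace) of how per-ADU transfer times differ from whole-connection elapsed time for a non-greedy, chat-like connection:

```python
# Three short ADUs separated by long application idle times.
adus = [            # (start_s, end_s, size_bytes)
    (0.0, 0.2, 10_000),
    (300.0, 300.1, 5_000),
    (900.0, 900.3, 20_000),
]
per_adu_times = [end - start for start, end, _ in adus]   # option 1
connection_time = adus[-1][1] - adus[0][0]                # option 2
total_bytes = sum(size for _, _, size in adus)

assert max(per_adu_times) < 0.5            # every burst completes quickly
assert connection_time > 900               # but the "flow" looks 900+ s slow
assert total_bytes / connection_time < 40  # and its "rate" is under 40 B/s
```

Option 1 reflects what TCP actually did with each burst; option 2 is dominated by idle time the transport never controlled.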
> I understand that transfer time per flow is the time elapsed between the > arrival of the first bit at the destination and the arrival of the last > bit at the destination of a flow (during its lifetime) > > why do you consider option 2 meaningless? It is meaningless for non-greedy flows, because the elapsed time is often dominated by times when the application has no data to send, rather than by TCP. This is like Hamed's example of a chat session. If a chat session is a single long-lived TCP connection, then we don't care that we only get a fraction of a bit per second, as long as each burst that we send gets sent quickly. Option 2 makes sense if the flow is *greedy*, so that TCP determines the total transfer time (that is, the flow consists of a single application data unit). In that case options 1 and 2 are identical. > Thanks for the clarification and sorry if I am out of phase (you asked for > interaction) Yes, thanks for the interaction. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: rchertov at purdue.edu (Roman Chertov) Date: Mon, 21 Apr 2008 14:12:13 -0400 Subject: [Tmrg] router buffer sizes Message-ID: <480CD8FD.20509@purdue.edu> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello, I am posting this email as per my conversation with Lachlan at INFOCOM. In our INFOCOM paper, "A Device Independent Router Model", and in our current work that extends it, we used empirical methods to ascertain the size of queues in various commercial routers. In the four commercial routers that we have experimented with, we have observed the following queue configurations: byte-based; slot-based; and separate buffers for various packet ranges (small, medium, large). Furthermore, the delay due to queuing ranged from 14 ms to almost 400 ms. 
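Roman's observed queuing delays (14 ms to almost 400 ms) follow directly from buffer size and link rate: a full drop-tail buffer of B bytes drained at R bit/s adds up to 8B/R seconds of delay. A minimal sketch of that relationship, with purely illustrative buffer and link values (none of these numbers are measurements from the paper):

```python
# Worst-case queuing delay for a drop-tail buffer drained at link rate.
# Buffer sizes and link speed below are assumed values for illustration,
# not figures from the routers measured in the INFOCOM paper.

def max_queuing_delay_ms(buffer_bytes: float, link_bps: float) -> float:
    """Time to drain a full buffer at the link rate, in milliseconds."""
    return buffer_bytes * 8 / link_bps * 1000

# A slot-based buffer: 50 slots of 1500-byte packets on a 10 Mb/s link.
slot_based = max_queuing_delay_ms(50 * 1500, 10e6)   # 60 ms

# A byte-based buffer: 16 KB on the same 10 Mb/s link.
byte_based = max_queuing_delay_ms(16 * 1024, 10e6)   # ~13 ms

print(f"slot-based: {slot_based:.1f} ms, byte-based: {byte_based:.1f} ms")
```

The same formula shows why the configuration style matters: a slot-based buffer's worst-case delay depends on packet size, while a byte-based buffer's does not.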
In a variety of TCP papers that I have read, people used either the default queue size in ns-2 (50 slots) or some arbitrary queue size. I think it will be useful for the future version of ns-3 to provide a variety of router "groups". Each group would be a collection of commercial routers which have similar characteristics, and then some representative for the group can be chosen. For instance, if an experimenter creates a routing node with many 1+ Gbps links, then choosing a queue size of 200 is not very representative of the real world. However, for such a scenario, choosing a group "backbone" will configure the routing node to have fairly large queues of 16K+ packet slots, hence making the test more representative of a real network. The major problem with the proposed approach is the difficulty of obtaining the data. Sometimes the data is available on a company's website; in other cases, empirical methods are required to get the needed information. The major drawback of an empirical approach is the need for physical hardware. However, even with the drawbacks, I think it is worthwhile to do this, as then simulation accuracy should increase. Our INFOCOM paper is available at: http://www.cs.purdue.edu/homes/fahmy/papers/infocom08-roman.pdf Roman Chertov -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFIDNj9T8ksiSCF2AYRAggkAJ9kErv4P61YvE3rmKbiJECIgtoLBwCdGLEL W2SGc2YqjPQg07OV9fNjrbA= =8mxv -----END PGP SIGNATURE----- From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 28 Apr 2008 08:24:13 -0700 Subject: [Tmrg] Test suite: Transfer time vs file size In-Reply-To: References: Message-ID: Lachlan - On Apr 18, 2008, at 7:39 PM, Lachlan Andrew wrote: > For the general scenarios, we specify one metric as the "transfer time > per flow versus file size". > How should we define that, given that the flows are mostly non-greedy? > Options include: > > 1. 
Treat each "application data unit" as a single flow. That would > mean the Tmix would have to record the statistics for us > 2. Treat each entire connection as a single flow. That would seem to > give meaningless values. > 3. Add some artificial greedy flows and only record statistics from > them. > > My favourite option is 1. What do others think? That makes sense to me. (Replying late...) - Sally http://www.icir.org/floyd/ From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 2 May 2008 14:06:33 -0700 Subject: [Tmrg] test suite: preliminary traces Message-ID: Greetings all, We finally have some traces which I submit as a first draft for use in the TCP evaluation suite. They're available at . Our aims were: - The traces should preserve the correlation structure of the original traffic traces kindly provided by UNC. - Dependence of file size distribution on RTTs should be preserved, as much as possible. - Short (100s) experiments should see a similar mean load to full 1-hour experiments. - For the NIST people: Very long simulations can be run without the aggregate traffic becoming perfectly periodic. Please read the description of how the traces were generated/massaged, and comment or criticise :) Wang Gang, is your NS2 framework well developed enough to check what per-packet RTT distribution is actually achieved when this traffic is used on the dumbbell topology with the delays we discussed? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: wanggang at research.nec.com.cn (Wang gang) Date: Sun, 4 May 2008 09:28:01 +0800 Subject: [Tmrg] Tmrg-interest Digest, Vol 20, Issue 1 References: Message-ID: <012801c8ad86$17e4e980$c44c1cac@ad.research.nec.com.cn> Lachlan, I will check it. Thank you for your work. 
---------------------------------------- Wang Gang NEC Labs, China 010-62705180 (ext.511) wanggang at research.nec.com.cn ----- Original Message ----- From: To: Sent: Sunday, May 04, 2008 3:00 AM Subject: Tmrg-interest Digest, Vol 20, Issue 1 > Send Tmrg-interest mailing list submissions to > tmrg-interest at ICSI.Berkeley.EDU > > To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > or, via email, send a message with subject or body 'help' to > tmrg-interest-request at ICSI.Berkeley.EDU > > You can reach the person managing the list at > tmrg-interest-owner at ICSI.Berkeley.EDU > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Tmrg-interest digest..." > > > Today's Topics: > > 1. test suite: preliminary traces (Lachlan Andrew) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 2 May 2008 14:06:33 -0700 > X-Virus: 7 > From: "Lachlan Andrew" > Subject: [Tmrg] test suite: preliminary traces > To: "Preethi Natarajan" , tmrg > > Message-ID: > > Content-Type: text/plain; charset=ISO-8859-1 > > Greetings all, > > We finally have some traces which I submit as a first draft for use in > the TCP evaluation suite. They're available at > . > > Our aims were: > - The traces should preserve the correlation structure of the original > traffic traces kindly provided by UNC. > - Dependence of file size distribution on RTTs should be preserved, as > much as possible. > - Short (100s) experiments should see similar mean load as full 1hr > experiements. > - For the NIST people: Very long simulations can be run without the > aggregate traffic becoming perfectly periodic. 
> > Please read the description of how the traces were generated/massaged, > and comment or criticise :) > > > Wang Gang, is your NS2 framework well developed enough to check what > per-packet RTT distribution is actually achieved when this traffic is > used on the dumbbell topology with the delays we discussed? > > Cheers, > Lachlan > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/lachlan > > > ------------------------------ > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > > End of Tmrg-interest Digest, Vol 20, Issue 1 > ******************************************** > From: ralfluebben at gmx.de (Ralf Lübben) Date: Tue, 10 Jun 2008 08:12:28 +0200 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? Message-ID: <200806100812.28885.ralfluebben@gmx.de> Hello all, I am looking forward to using the Evaluation Suite and in particular TMix for analyzing and for parametric modeling of network traffic. I hope TMix is a good starting point for that. Are the tools already available or is there a release schedule? Thanks a lot. Cheers, Ralf From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Tue, 10 Jun 2008 13:01:06 -0700 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? In-Reply-To: <200806100812.28885.ralfluebben@gmx.de> References: <200806100812.28885.ralfluebben@gmx.de> Message-ID: Greetings Ralf, Thanks for your interest, and your patience with the slow progress recently. Caltech's testbed isn't yet ready, and I don't think Wang Gang's simulation suite is ready yet either. We don't have a release schedule, but we're looking at having something ready for beta testing in a couple of months. 
The specs will change once we start comparing the suite results for standard TCP with traffic measurements. Cheers, Lachlan 2008/6/9 Ralf Lübben : > Hello all, > > I am looking forward to using the Evaluation Suite and in particular TMix for > analyzing and for parametric modeling of network traffic. > I hope TMix is a good starting point for that. > > Are the tools already available or is there a release schedule? > > Thanks a lot. > > Cheers, > Ralf > > > > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: weixl at caltech.edu (Xiaoliang "David" Wei) Date: Tue, 10 Jun 2008 14:48:34 -0700 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? In-Reply-To: References: <200806100812.28885.ralfluebben@gmx.de> Message-ID: <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> UCLA seems to have an evaluation suite http://netlab.cs.ucla.edu/tcpsuite/ I haven't had time to try it out yet, though. -David On Tue, Jun 10, 2008 at 1:01 PM, Lachlan Andrew wrote: > Greetings Ralf, > > Thanks for your interest, and your patience with the slow progress recently. > Caltech's testbed isn't yet ready, and I don't think Wang Gang's > simulation suite is ready yet either. > > We don't have a release schedule, but we're looking at having > something ready for beta testing in a couple of months. The specs > will change once we start comparing the suite results for standard TCP > with traffic measurements. > > Cheers, > Lachlan > > 2008/6/9 Ralf Lübben : >> Hello all, >> >> I am looking forward to using the Evaluation Suite and in particular TMix for >> analyzing and for parametric modeling of network traffic. 
>> I hope TMix is a good starting point for that. >> >> Are the tools already available or is there a release schedule? >> >> Thanks a lot. >> >> Cheers, >> Ralf >> >> >> >> >> _______________________________________________ >> Tmrg-interest mailing list >> Tmrg-interest at ICSI.Berkeley.EDU >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest >> > > > > -- > Lachlan Andrew Dept of Computer Science, Caltech > 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > http://netlab.caltech.edu/lachlan > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > -- Xiaoliang "David" Wei http://davidwei.org *********************************************** From: cesar at cs.ucla.edu (Cesar Marcondes) Date: Tue, 10 Jun 2008 19:39:46 -0300 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? In-Reply-To: <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> References: <200806100812.28885.ralfluebben@gmx.de> <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> Message-ID: <88d780b40806101539u1c2a797wc5aa2afe3c730aa0@mail.gmail.com> Dear David, Thanks for pointing out the UCLA TCP evaluation suite. However, this suite is not as complete as the one described in the PFLDnet'08 paper. Even so, if you try it out, let me know if you have problems, since I'm maintaining the tool. Best regards, Cesar Marcondes On Tue, Jun 10, 2008 at 6:48 PM, Xiaoliang David Wei wrote: > UCLA seems to have an evaluation suite http://netlab.cs.ucla.edu/tcpsuite/ > I haven't had time to try it out yet, though. > > -David > > On Tue, Jun 10, 2008 at 1:01 PM, Lachlan Andrew > wrote: >> Greetings Ralf, >> >> Thanks for your interest, and your patience with the slow progress recently. 
>> Caltech's testbed isn't yet ready, and I don't think Wang Gang's >> simulation suite is ready yet either. >> >> We don't have a release schedule, but we're looking at having >> something ready for beta testing in a couple of months. The specs >> will change once we start comparing the suite results for standard TCP >> with traffic measurements. >> >> Cheers, >> Lachlan >> >> 2008/6/9 Ralf L?bben : >>> Hello all, >>> >>> I am looking forward to use the Evaluation Suite and in particular TMix for >>> analyzing and for parametric modeling of network traffic. >>> I hope TMix is a good starting point for that. >>> >>> Are the tools already available or is there a realease schedule? >>> >>> Thanks a lot. >>> >>> Cheers, >>> Ralf >>> >>> >>> >>> >>> _______________________________________________ >>> Tmrg-interest mailing list >>> Tmrg-interest at ICSI.Berkeley.EDU >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest >>> >> >> >> >> -- >> Lachlan Andrew Dept of Computer Science, Caltech >> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA >> Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 >> http://netlab.caltech.edu/lachlan >> >> _______________________________________________ >> Tmrg-interest mailing list >> Tmrg-interest at ICSI.Berkeley.EDU >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest >> >> > > > > -- > Xiaoliang "David" Wei > http://davidwei.org > *********************************************** > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > From: ralfluebben at gmx.de (Ralf =?iso-8859-1?q?L=FCbben?=) Date: Wed, 11 Jun 2008 08:13:34 +0200 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? 
In-Reply-To: <88d780b40806101539u1c2a797wc5aa2afe3c730aa0@mail.gmail.com> References: <200806100812.28885.ralfluebben@gmx.de> <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> <88d780b40806101539u1c2a797wc5aa2afe3c730aa0@mail.gmail.com> Message-ID: <200806110813.35114.ralfluebben@gmx.de> Greetings all, thanks a lot for the quick responses. At the moment I am mostly interested in the TMix tool for some further traffic modeling. Does anyone know the actual status of TMix? Cheers, Ralf On Wednesday 11 June 2008 00:39:46, Cesar Marcondes wrote: > Dear David, > > Thanks for pointing out the UCLA TCP evaluation suite. > However, this suite is not as complete as the one described in the > PFLDnet'08 paper. > Even so, if you try it out, let me know if you have problems, since > I'm maintaining the tool. > > Best regards, > Cesar Marcondes > > On Tue, Jun 10, 2008 at 6:48 PM, Xiaoliang David Wei wrote: > > UCLA seems to have an evaluation suite http://netlab.cs.ucla.edu/tcpsuite/ > > I haven't had time to try it out yet, though. > > > > -David > > > > On Tue, Jun 10, 2008 at 1:01 PM, Lachlan Andrew > > > > wrote: > >> Greetings Ralf, > >> > >> Thanks for your interest, and your patience with the slow progress > >> recently. Caltech's testbed isn't yet ready, and I don't think Wang > >> Gang's simulation suite is ready yet either. > >> > >> We don't have a release schedule, but we're looking at having > >> something ready for beta testing in a couple of months. The specs > >> will change once we start comparing the suite results for standard TCP > >> with traffic measurements. > >> > >> Cheers, > >> Lachlan > >> > >> 2008/6/9 Ralf Lübben : > >>> Hello all, > >>> > >>> I am looking forward to using the Evaluation Suite and in particular TMix > >>> for analyzing and for parametric modeling of network traffic. > >>> I hope TMix is a good starting point for that. > >>> > >>> Are the tools already available or is there a release schedule? > >>> > >>> Thanks a lot. 
> >>> > >>> Cheers, > >>> Ralf > >>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> Tmrg-interest mailing list > >>> Tmrg-interest at ICSI.Berkeley.EDU > >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > >> > >> -- > >> Lachlan Andrew Dept of Computer Science, Caltech > >> 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA > >> Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 > >> http://netlab.caltech.edu/lachlan > >> > >> _______________________________________________ > >> Tmrg-interest mailing list > >> Tmrg-interest at ICSI.Berkeley.EDU > >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > > > -- > > Xiaoliang "David" Wei > > http://davidwei.org > > *********************************************** > > > > _______________________________________________ > > Tmrg-interest mailing list > > Tmrg-interest at ICSI.Berkeley.EDU > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 11 Jun 2008 00:33:41 -0700 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? In-Reply-To: <200806110813.35114.ralfluebben@gmx.de> References: <200806100812.28885.ralfluebben@gmx.de> <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> <88d780b40806101539u1c2a797wc5aa2afe3c730aa0@mail.gmail.com> <200806110813.35114.ralfluebben@gmx.de> Message-ID: 2008/6/10 Ralf L?bben : > > Does anyone know the actual status of TMix? It seems to be in beta-testing; there is basically-working code but minimal documentation and no support. Jay Aikat (ja unc.edu) could give more details. 
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: jaikat at email.unc.edu (Jay Aikat) Date: Wed, 11 Jun 2008 13:24:14 -0400 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? In-Reply-To: References: <200806100812.28885.ralfluebben@gmx.de> <7335583a0806101448j376e1a6bw90e325ec3941b2a@mail.gmail.com> <88d780b40806101539u1c2a797wc5aa2afe3c730aa0@mail.gmail.com> <200806110813.35114.ralfluebben@gmx.de> Message-ID: <48500A3E.2010907@email.unc.edu> Ralf and others interested in Tmix code: The Linux version of Tmix has been running in our testbed for several months. We are in our final round of testing and validation and our intent is to provide a well-tested, documented, supported release by the end of the Summer. In the meantime, we can make it available "as is" (little documentation and limited support) to a small number of researchers working with the TCP Evaluation Suite. We would expect that these early users would provide additional testing and contribute their fixes/new features back to us. We are sorry to have to be so restrictive in making the Linux version available, but we believe the limited resources we have are better used in testing/documenting for a solid release rather than supporting the current code. Thank you for your patience. --Jay Aikat. Lachlan Andrew wrote: > 2008/6/10 Ralf L?bben : >> Does anyone know the actual status of TMix? > > It seems to be in beta-testing; there is basically-working code but > minimal documentation and no support. Jay Aikat (ja unc.edu) could > give more details. > > Cheers, > Lachlan > From: ralfluebben at gmx.de (Ralf =?iso-8859-1?q?L=FCbben?=) Date: Fri, 13 Jun 2008 07:49:08 +0200 Subject: [Tmrg] Availability of the Evaluation Suite/TMix? 
In-Reply-To: <48500A3E.2010907@email.unc.edu> References: <200806100812.28885.ralfluebben@gmx.de> <48500A3E.2010907@email.unc.edu> Message-ID: <200806130749.08159.ralfluebben@gmx.de> Hi Jay, thanks a lot for the information. A preliminary version would be fine. Certainly, I would support you with testing and contribute my fixes/features back to you. Hopefully I will have some time around the middle of next month; I will contact you then. Cheers, Ralf On Wednesday 11 June 2008 19:24:14, Jay Aikat wrote: > Ralf and others interested in Tmix code: > The Linux version of Tmix has been running in our testbed for several > months. We are in our final round of testing and validation and our intent > is to provide a well-tested, documented, supported release by the end of > the Summer. > > In the meantime, we can make it available "as is" (little documentation and > limited support) to a small number of researchers working with the TCP > Evaluation Suite. We would expect that these early users would provide > additional testing and contribute their fixes/new features back to us. > > We are sorry to have to be so restrictive in making the Linux version > available, but we believe the limited resources we have are better used in > testing/documenting for a solid release rather than supporting the current > code. Thank you for your patience. > --Jay Aikat. > > Lachlan Andrew wrote: > > 2008/6/10 Ralf Lübben : > >> Does anyone know the actual status of TMix? > > > > It seems to be in beta-testing; there is basically-working code but > > minimal documentation and no support. Jay Aikat (ja unc.edu) could > > give more details. > > > > Cheers, > > Lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 27 Jun 2008 17:05:25 -0700 Subject: [Tmrg] TCP evaluation suite Message-ID: Greetings all, After a long delay, here is a draft Internet Draft based on the PFLDnet TCP evaluation suite paper. 
An initial WAN-in-Lab implementation should be ready in a couple of months, and I believe Wang Gang is still working part time on the NS implementation. Once that is ready, we can start setting actual parameters by comparing the results against measurement studies. There are still lots of "TBD" parameters, and comments (in bold in the .html). - Currently, statistics are listed as being measured over the last *half* of the experiment. On a 100s experiment, that gives 50s to avoid the effect of simultaneous flows slow-starting. However (a) on long experiments, it is overkill for avoiding the initial slow-start, but (b) it may be too short for the number of flows to reach "equilibrium". I vote that we recommend a particular warm-up time (say 50s) independent of the length of the experiment, and start the system "near equilibrium" (not from zero flows). - When comparing with "standard TCP", we specify which recent proposals are included and not. Which proposals should we list? I vote for: = SACK (included) = ECN (not included) = Window scaling (included, even though many Windows machines don't use it) = Forward RTO (included) = Appropriate Byte Counting (It is on in Windows, and was briefly on in Linux. If we don't include it, should we account for Linux's suppression of delayed ACKs during initial slow start when comparing against measurements?) - Currently, some of it is written in the style of a paper ("We use two flows..."). I think we should make it prescriptive ("Do this") instead of descriptive. Should we also use SHOULD, MAY etc to make clear what is part of the "core" tests? - Once there is an NS version of the test, it would be good to check that the RTTs actually give a good approximation to measured RTT distributions - All traffic loads have yet to be determined. Sally suggested setting these by matching the loss rate observed in the Internet with the loss rate arising from newReno. 
Since many current web servers use Linux/CUBIC, should we instead match the measurements to simulations of an appropriate mixture of newReno and CUBIC? - How polished does something need to be to be registered as an "Internet Draft"? Can we submit this as a -00 draft, or should we get more consensus first? Anyone who wants to contribute to this draft is welcome to, whether or not you were involved with the PFLDnet paper. The author list is starting from scratch. Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan -------------- next part -------------- A non-text attachment was scrubbed... Name: draft-irtf-tmrg-tests-00.xml Type: text/xml Size: 58887 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080627/18148a6d/attachment-0001.xml -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: draft-irtf-tmrg-tests-00.txt Url: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080627/18148a6d/attachment-0001.txt -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20080627/18148a6d/attachment-0001.html From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sat, 28 Jun 2008 11:42:00 -0700 Subject: [Tmrg] TCP evaluation suite In-Reply-To: <48667F01.30005@purdue.edu> References: <48667F01.30005@purdue.edu> Message-ID: 2008/6/28 Roman Chertov : > Hello Lachlan, > I think it would be worthwhile to include experiments which deal with > admission of new flows into the current steady state. Such an experiment > will allow us to examine the impact of the startup stage on the already > established flows. > > Roman Greetings Roman, Thanks for your input. I agree that we need to study arriving flows. 
That was the goal of sections 4.3 and 4.4. Are you suggesting changing them or adding something new? We currently don't consider the impact of "slow start" on long-lived flows; is that what you are suggesting? (I hope you don't mind my Cc'ing your good suggestion to the list.) Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: rchertov at purdue.edu (Roman Chertov) Date: Sat, 28 Jun 2008 12:10:43 -0700 Subject: [Tmrg] TCP evaluation suite In-Reply-To: References: <48667F01.30005@purdue.edu> Message-ID: <48668CB3.1050006@purdue.edu> Lachlan Andrew wrote: > 2008/6/28 Roman Chertov : >> Hello Lachlan, >> I think it would be worth while to include experiments which deal with >> admission of new flows into the current steady state. Such an experiment >> will allow to examine the impact of the startup stage on the already >> established flows. >> >> Roman > > Greetings Roman, > > Thanks for your input. I agree that we need to study arriving flows. > That was the goal of sections 4.3 and 4.4. Are you suggesting > changing them or adding something new? We currently don't consider > the impact of "slow start" on long-lived flows; is that what you are > suggesting? Yes, I think it would be valuable to look at scenarios where there is a collection of long-lived flows and a collection of short-lived flows. The short-lived flows send enough data for the slow start to increase the window several times, but not enough data to transition into congestion avoidance. This would be analogous to interleaving large and small file transfers. The obvious metrics to vary would be the arrival rate of short flows and the ratio of long-lived to short-lived flows. > > (I hope you don't mind my Cc'ing your good suggestion to the list.) Not a problem. 
Roman > > Cheers, > Lachlan > From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 30 Jun 2008 13:26:04 -0700 Subject: [Tmrg] TCP evaluation suite In-Reply-To: References: Message-ID: <46A9D1CE-0660-4511-9B6F-E0D83A26E4E7@mac.com> Lachlan - > After a long delay, here is a draft Internet Draft based on the > PFLDnet TCP evaluation suite paper. ... > - Currently, some of it is written in the style of a paper ("We use > two flows..."). I think we should make it prescriptive ("Do this") > instead of descriptive. Should we also use SHOULD, MAY etc to make > clear what is part of the "core" tests? I haven't read this version yet, but I don't think it needs SHOULD, MAY, etc. Those are usually used only for protocols. (For Informational RFCs that were also targeted as Best Current Practice, and became Best Current Practice RFCs, you could look at RFC 5033, or RFC 2914.) ... > - All traffic loads have yet to be determined. Sally suggested > setting these by matching the loss rate observed in the Internet with > the loss rate arising from newReno. Since many current web servers > use Linux/CUBIC, should we instead match the measurements to > simulations of an appropriate mixture of newReno and CUBIC? Either way seems ok by me. Though there is a lot of TCP traffic out there that is not from web servers... > - How polished does something need to be to be registered as an > "Internet Draft"? Can we submit this as a -00 draft, or should we get > more consensus first? I think it is fine to submit the draft as is, as -00. An initial version of a draft is not taken to represent consensus. 
Take care, - Sally http://www.icir.org/floyd/ From: fred at cisco.com (Fred Baker) Date: Mon, 30 Jun 2008 15:32:32 -0700 Subject: [Tmrg] TCP evaluation suite In-Reply-To: <46A9D1CE-0660-4511-9B6F-E0D83A26E4E7@mac.com> References: <46A9D1CE-0660-4511-9B6F-E0D83A26E4E7@mac.com> Message-ID: On Jun 30, 2008, at 1:26 PM, Sally Floyd wrote: > I haven't read this version yet, but I don't think it needs SHOULD, > MAY, etc. Those are usually used only for protocols. (For > Informational RFCs that were also targeted as Best Current Practice, > and became Best Current Practice RFCs, you could look at RFC 5033, > or RFC 2914.) Actually, they are intended for requirements documents. The first RFC where one could construe "SHOULD" being used that way is RFC 827, in which Eric indicates that in a certain circumstance a "gateway" SHOULD do something in particular. But it's not the word that is capitalized per se, it's two sentences. The first document in which it is defined as we use it now is RFC 1122/1123 and later 1812, and the question before the house (per section 1.3.2 of RFC 1122) is implementation compliance - an implementation can be said to be "conditionally compliant" if it implements all the MUSTs, and "fully compliant" if it also implements the SHOULDs. To be honest, I think most documents that use RFC 2119 language mis-use it. I am amused by RFC 2119's "guidance": Imperatives of the type defined in this memo must be used with care and sparingly. In particular, they MUST only be used where it is actually required for interoperation or to limit behavior which has potential for causing harm (e.g., limiting retransmisssions) For example, they must not be used to try to impose a particular method on implementors where the method is not required for interoperability. I wish he had said Imperatives of the type defined in this memo MUST be used with care and sparingly.... 
:-)

From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Fri, 25 Jul 2008 16:52:43 -0700 Subject: [Tmrg] Measuring "burst completion times" Message-ID: Greetings all, An internet draft of the test suite has been submitted. It has lots of questions and points for discussion (in bold in the html version), so feel free to start a thread. Tom Q is busy implementing it on WAN-in-Lab. We're currently having an issue with the "transfer time per flow versus file size" statistic for the "basic" scenario (Section 3.1.3). (As an aside: That is listed as an optional extra, but seems an important statistic. Should it be added to the core statistics?) It was agreed to use Tmix, which generates non-greedy connections. a) Does everyone agree that we should measure the transfer times of *bursts* (Tmix "ADUs") vs size, instead of connection times vs size? b) If so, should we use the time (and length) of a "request-response" pair, rather than an individual burst? Reasons for this are: i) That is what the user actually observes ii) Tmix currently records that (at least the Linux version does) iii) Measuring the sending time of a single burst would be difficult in a distributed network: The sending application can't know when the last byte was received -- it just sees when it was swallowed by the socket layer. This is slightly problematic as it could contain two slow-starts, but again, this is what the client actually observes. c) Tmix allows pauses between the request and response. We propose that we massage the traces by moving that pause to be after the response. That will allow us to measure the impact of TCP on the request-response pair but will keep the load roughly unchanged. d) The Tmix traces we're using have some "concurrent" connections, which are not sequences of request-response pairs. We propose that we ignore these for calculating the burst duration vs burst size statistic.
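A minimal sketch of what proposal (b) would measure: the duration of a request-response pair as the client observes it, from sending the first byte of the request until the last byte of the response arrives. This is an illustration only, not Tmix code; the host, port, request and expected response length are placeholder parameters:

```python
import socket
import time

def request_response_time(host: str, port: int, request: bytes, resp_len: int) -> float:
    """Time one request/response exchange as the client sees it:
    from sending the first byte of the request until the last byte
    of the response has arrived at the application."""
    with socket.create_connection((host, port)) as s:
        start = time.monotonic()
        s.sendall(request)
        received = 0
        while received < resp_len:
            chunk = s.recv(65536)
            if not chunk:  # server closed the connection early
                break
            received += len(chunk)
        return time.monotonic() - start
```

Note that, as point (iii) above observes, the sender alone cannot time a one-way burst this way: sendall() returns when the data is accepted by the socket layer, not when it is received, which is why the full request-response round trip is the natural client-side measurement.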
Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan

From: krasnoj at gmx.at (Stefan Hirschmann) Date: Wed, 30 Jul 2008 12:32:51 +0200 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite Message-ID: <20080730103251.299310@gmx.net> Hi! In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: "3.2. Delay/throughput tradeoff as function of queue size" describing the buffer sizes of the routers, but only for the access link scenario. I wanted to extend the values to the other scenarios and noticed a problem with it. The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0.53 packets. So even if I use the BDP the value is much too small. A rounding to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? Also when I looked at the document I noticed that there is no scenario between 64 kbps and 11 Mbps. By now, nearly all ADSL edge connections are faster than 64 kbps and slower than 11 Mbps. Is there a reason why there is no scenario in this range? Cheers Stefan

From: sallyfloyd at mac.com (Sally Floyd) Date: Wed, 30 Jul 2008 14:50:18 -0700 Subject: [Tmrg] a new co-chair for TMRG Message-ID: <0EF6FFC6-14A8-4DA2-9C99-542EA9EDF050@mac.com> To TMRG about a new TMRG co-chair: Because I am working towards retirement (trying to wrap up my current projects), it is time for me to find a co-chair for TMRG. I have asked Lachlan Andrew if he would be willing to do this, and he said he would.
Lachlan strikes me as an ideal person to be co-chair of TMRG - he organized the November meeting to produce the paper on "Towards a Common TCP Evaluation Suite", and produced the corresponding internet-draft on "Common TCP Evaluation Suite". So this is an email to the TMRG mailing list, to check that there is support for Lachlan to become co-chair of the research group. (TMRG was chartered to be a fairly low-activity research group, to produce a few documents on the models that we use in simulations, analysis, and experiments in evaluating transport protocols. The TMRG web site is at: "http://www.irtf.org/charter.php?gtype=rg&group=tmrg".) Many thanks, - Sally http://www.icir.org/floyd/

_______________________________________________ Tmrg-interest mailing list Tmrg-interest at ICSI.Berkeley.EDU http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest

From: ldunn at cisco.com (Lawrence D. Dunn) Date: Wed, 30 Jul 2008 17:25:52 -0500 Subject: [Tmrg] a new co-chair for TMRG In-Reply-To: <0EF6FFC6-14A8-4DA2-9C99-542EA9EDF050@mac.com> References: <0EF6FFC6-14A8-4DA2-9C99-542EA9EDF050@mac.com> Message-ID: Sally, Support. Larry

From: michael.welzl at uibk.ac.at (Michael Welzl) Date: Thu, 31 Jul 2008 07:56:34 +0200 Subject: [Tmrg] a new co-chair for TMRG References: <0EF6FFC6-14A8-4DA2-9C99-542EA9EDF050@mac.com> Message-ID: <001c01c8f2d2$306a0e90$0200a8c0@fun> Hi, I support this. Cheers, Michael

From: nfonseca at ic.unicamp.br (nfonseca at ic.unicamp.br) Date: Thu, 31 Jul 2008 17:07:37 -0300 (BRT) Subject: [Tmrg] Tmrg-interest Digest, Vol 22, Issue 4 In-Reply-To: References: Message-ID: <3044.143.106.7.122.1217534857.squirrel@webmail.ic.unicamp.br> I support the proposal.

From: wanggang at research.nec.com.cn (Wang gang) Date: Fri, 1 Aug 2008 09:51:32 +0800 Subject: [Tmrg] Tmrg-interest Digest, Vol 22, Issue 4 References: Message-ID: <00f201c8f379$1f8680c0$c44c1cac@ad.research.nec.com.cn> Dear all, I support Lachlan. Wang Gang.

From: sallyfloyd at mac.com (Sally Floyd) Date: Mon, 04 Aug 2008 21:41:47 -0700 Subject: [Tmrg] a new co-chair for TMRG In-Reply-To: <20080804144344.8AF74824417@lawyers.icir.org> References: <20080804144344.8AF74824417@lawyers.icir.org> Message-ID: This is to announce that Lachlan Andrew is now the co-chair of TMRG. All of the feedback was in support. Many thanks to Lachlan for agreeing to take this on! - Sally http://www.icir.org/floyd/

From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Mon, 4 Aug 2008 22:05:02 -0700 Subject: [Tmrg] a new co-chair for TMRG In-Reply-To: References: <20080804144344.8AF74824417@lawyers.icir.org> Message-ID: Thanks Sally, and everyone.
Cheers, Lachlan 2008/8/4 Sally Floyd : > This is to announce that Lachlan Andrew is now the > co-chair of TMRG. All of the feedback was in support. > > Many thanks to Lachlan for agreeing to take this on! > > - Sally > http://www.icir.org/floyd/ > > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Sun, 31 Aug 2008 15:21:51 -0700 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <20080730103251.299310@gmx.net> References: <20080730103251.299310@gmx.net> Message-ID: Greetings Stefan, Thanks for your interest in the test suite. I apologise for the long delay in getting back to you. 2008/7/30 Stefan Hirschmann : > > In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: > "3.2. Delay/throughput tradeoff as function of queue size" > describing the buffer sizes of the routers, but only for the access link scenario. > > I wanted to extend the values to the other scenarios and noticed a problem with it. > The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0,53 packets. > > So even if I use the BDP the value is much too small. A rounding to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? The Dial-Up scenario is there partly for POTS modems, and partly for GPRS. You should find out the buffer size used by either one of those (and then it would be great to post it to the list!). If you have access to a dial-up connection, you could try to measure the buffer size: Ping the next-hop node with an idle link, and then while downloading something large. The difference in RTTs will give a good estimate of the buffer size. I don't have figures for GPRS either. (Lars?) However, GPRS typically has lots of delay (500ms or more). 
This is made complicated by opportunistic scheduling etc., but if we assume it is all queueing delay, then it would be about 3 packets. > Also when I looked at the document I noticed that there is no scenario between 64 kbps and 11 Mbps. By now, nearly all ADSL edge connections are faster than 64 kbps and slower than 11 Mbps. Is there a reason why there is no scenario in this range? We are trying to keep the suite small. The focus has been on larger bit rates because that is where most problems with TCP have been found in the past, but we also wanted to make sure that any proposal still works at low rates (64k). I think the general opinion was that the medium-rate problems would mostly also show up in the low-rate or high-rate tests. You are right that we should consider having a 1Mbit/s test too. How about we wait until we have a prototype implementation, and then we can see how long the tests take, and whether we can afford to add an extra one? Cheers, Lachlan -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan

From: krasnoj at gmx.at (Stefan Hirschmann) Date: Wed, 01 Oct 2008 20:34:22 +0200 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48E3C1F5.40906@gmx.at> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> Message-ID: <48E3C2AE.3010806@gmx.at> Stefan Hirschmann wrote: > Greetings Andrew and all other readers, > >> Lachlan Andrew wrote: >>> 2008/7/30 Stefan Hirschmann : > >> Greetings Stefan, >> >> Thanks for your interest in the test suite. I apologise for the long >> delay in getting back to you. > > I apologize for this long delay too. But it was not easy to find anyone > with a 56K POTS modem still in use. > > >>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: >>> "3.2.
Delay/throughput tradeoff as function of queue size" >>> describing the buffer sizes of the routers, but only for the access link scenario. >>> >>> I wanted to extend the values to the other scenarios and noticed a problem with it. >>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0.53 packets. >>> >>> So even if I use the BDP the value is much too small. A rounding to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? >> The Dial-Up scenario is there partly for POTS modems, and partly for >> GPRS. You should find out the buffer size used by either one of those >> (and then it would be great to post it to the list!). >> >> If you have access to a dial-up connection, you could try to measure >> the buffer size: Ping the next-hop node with an idle link, and then >> while downloading something large. The difference in RTTs will give a >> good estimate of the buffer size. > > OK I have done it. The tests were made: > DATE: 2008/10/01 around 19:30 > Used 56K POTS Provider: Tele2 Austria > Operating System: Windows XP Media Centre Edition > Large background traffic: A Linux kernel image from ftp://ftp2.kernel.org > > The exact test protocol is at the end of the email. > The most important data are: > uncongested: > Minimum = 134ms, Maximum = 148ms, Mean = 141ms > > congested: > Minimum = 5963ms, Maximum = 8541ms, Mean = 7177ms > > The correct formula should be: > max(queuing time) = max(congested) - min(uncongested) > 8407 ms = 8541 ms - 134 ms > > 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the actually usable rate. In this case: > time * bandwidth = amount of data > 8.541 s * 6 KByte/s = 51.246 KByte > > If you assume that the packet size is 1.5 KByte, then: > 51.246 KByte / 1.5 KByte = 34.164 > > So 35 is the queue size in packets.
> > > Cheers Stefan > > > > Now the complete console log (was a German Windows version): > =============================================================================================== > Microsoft Windows XP [Version 5.1.2600] > (C) Copyright 1985-2001 Microsoft Corp. > > C:\Dokumente und Einstellungen\Leo>ping www.google.at > > Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: > > Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 > > Ping-Statistik für 209.85.129.147: > Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), > Ca. Zeitangaben in Millisek.: > Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms > > > C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at > > Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: > > Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 > Zeitüberschreitung der Anforderung. > Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 > Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 > > Ping-Statistik für 209.85.129.104: > Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), > Ca. Zeitangaben in Millisek.: > Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms > > C:\Dokumente und Einstellungen\Leo> >

From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Thu, 2 Oct 2008 04:42:16 +1000 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48E3C1F5.40906@gmx.at> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> Message-ID: Thanks Stefan. Those numbers are interesting. I'm surprised that there was 8s delay when congested. I'm wondering if ping packets are treated differently. (Many systems give ICMP packets lower priority.) Still, 35 packets sounds a reasonable buffer size.
Does anyone else on the list have any data to support or contradict this? My parents-in-law use dial-up, so I'll try to check their connection soon. Cheers, Lachlan 2008/10/2 Stefan Hirschmann : > Greeting Andrew and all other readers, > >> Lachlan Andrew wrote: >>> 2008/7/30 Stefan Hirschmann : > >> Greetings Stefan, >> >> Thanks for your interest in the test suite. I apologise for the long >> delay in getting back to you. > > I apologize for this long delay too. But it was not easy to find anyone > with a 56K POTS modem still in use. > > >>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: >>> "3.2. Delay/throughput tradeoff as function of queue size" >>> describing the buffer sizes of the routers, but only for the access link scenario. >>> >>> I wanted to extend the values to the other scenarios and noticed a problem with it. >>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0,53 packets. >>> >>> So even if I use the BDP the value is much too small. A rounding to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? >> >> The Dial-Up scenario is there partly for POTS modems, and partly for >> GPRS. You should find out the buffer size used by either one of those >> (and then it would be great to post it to the list!). >> >> If you have access to a dial-up connection, you could try to measure >> the buffer size: Ping the next-hop node with an idle link, and then >> while downloading something large. The difference in RTTs will give a >> good estimate of the buffer size. > > > OK I have done it. The test were made: > DATE: 2008/10/01 around 19:30 > Used 56K POTS Provider: Tele2 Austria > Operating System: Windows XP Media Centre Edition > Large background traffic: A linux kernel image from ftp://ftp2.kernel.org > > The exact testprotocol is at the end of the email. 
> The most important datas are: > uncongested: > Minimum = 134ms, Maximum = 148ms, Mean = 141ms > > congested: > Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms > > The correct formula should be: > max(queuing time) = max(congested) - min(uncongested) > 8407 ms = 8541 ms - 134 ms > > 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the real > usable value. In this case: > time * bandwidth = amount of data > 8,541 s * 6 KByte/s = 51,246 KByte > > If you say, that the packetsize is 1,5 KByte than: > 51,246 KByte / 1.5 KByte = 34,164 > > So 35 is the Queuesize in packets. > > > > Cheers Stefan > > > > Now the complete console log (was a German Windows vesion): > =============================================================================================== > Microsoft Windows XP [Version 5.1.2600] > (C) Copyright 1985-2001 Microsoft Corp. > > C:\Dokumente und Einstellungen\Leo>ping www.google.at > > Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: > > Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 > Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 > > Ping-Statistik f?r 209.85.129.147: > Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), > Ca. Zeitangaben in Millisek.: > Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms > > > C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at > > Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: > > Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 > Zeit?berschreitung der Anforderung. > Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 > Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 > > Ping-Statistik f?r 209.85.129.104: > Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), > Ca. 
Zeitangaben in Millisek.: > Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms > > C:\Dokumente und Einstellungen\Leo> > -- Lachlan Andrew Dept of Computer Science, Caltech 1200 E California Blvd, Mail Code 256-80, Pasadena CA 91125, USA Ph: +1 (626) 395-8820 Fax: +1 (626) 568-3603 http://netlab.caltech.edu/lachlan From: quetchen at caltech.edu (Tom Quetchenbach) Date: Wed, 01 Oct 2008 13:04:58 -0700 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> Message-ID: <48E3D7EA.8030703@caltech.edu> My ISP gives me dial-up access as a backup to my DSL, so I'll try to play around with it at some point. -Tom Lachlan Andrew wrote: > Thanks Stefan. > > Those numbers are interesting. I'm surprised that there was 8s delay > when congested. I'm wondering if ping packets are treated > differently. (Many systems give ICMP packets lower priority.) Still, > 35 packets sounds a reasonable buffer size. > > Does anyone else on the list have any data to support or contradict > this? My parents-in-law use dial-up, so I'll try to check their > connection soon. > > Cheers, > Lachlan > > 2008/10/2 Stefan Hirschmann : >> Greeting Andrew and all other readers, >> >>> Lachlan Andrew wrote: >>>> 2008/7/30 Stefan Hirschmann : >>> Greetings Stefan, >>> >>> Thanks for your interest in the test suite. I apologise for the long >>> delay in getting back to you. >> I apologize for this long delay too. But it was not easy to find anyone >> with a 56K POTS modem still in use. >> >> >>>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: >>>> "3.2. Delay/throughput tradeoff as function of queue size" >>>> describing the buffer sizes of the routers, but only for the access link scenario. >>>> >>>> I wanted to extend the values to the other scenarios and noticed a problem with it. 
>>>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0.53 packets. >>>> >>>> So even if I use the BDP the value is much too small. Rounding up to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? >>> The Dial-Up scenario is there partly for POTS modems, and partly for >>> GPRS. You should find out the buffer size used by either one of those >>> (and then it would be great to post it to the list!). >>> >>> If you have access to a dial-up connection, you could try to measure >>> the buffer size: Ping the next-hop node with an idle link, and then >>> while downloading something large. The difference in RTTs will give a >>> good estimate of the buffer size. >> >> OK, I have done it. The tests were made: >> DATE: 2008/10/01 around 19:30 >> Used 56K POTS Provider: Tele2 Austria >> Operating System: Windows XP Media Centre Edition >> Large background traffic: A Linux kernel image from ftp://ftp2.kernel.org >> >> The exact test protocol is at the end of the email. >> The most important data are: >> uncongested: >> Minimum = 134ms, Maximum = 148ms, Mean = 141ms >> >> congested: >> Minimum = 5963ms, Maximum = 8541ms, Mean = 7177ms >> >> The correct formula should be: >> max(queuing time) = max(congested) - min(uncongested) >> 8407 ms = 8541 ms - 134 ms >> >> 56 Kbit/s is 7 KByte/s; 6 KByte/s is a realistic estimate of the >> actually usable rate. In this case: >> time * bandwidth = amount of data >> 8.541 s * 6 KByte/s = 51.246 KByte >> >> If you assume a packet size of 1.5 KByte, then: >> 51.246 KByte / 1.5 KByte = 34.164 >> >> So the queue size is 35 packets. >> >> >> >> Cheers Stefan >> >> >> >> Now the complete console log (it was a German Windows version): >> =============================================================================================== >> Microsoft Windows XP [Version 5.1.2600] >> (C) Copyright 1985-2001 Microsoft Corp.
>> >> C:\Dokumente und Einstellungen\Leo>ping www.google.at >> >> Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: >> >> Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 >> Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 >> Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 >> Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 >> >> Ping-Statistik für 209.85.129.147: >> Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), >> Ca. Zeitangaben in Millisek.: >> Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms >> >> >> C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at >> >> Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: >> >> Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 >> Zeitüberschreitung der Anforderung. >> Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 >> Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 >> >> Ping-Statistik für 209.85.129.104: >> Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), >> Ca. Zeitangaben in Millisek.: >> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >> >> C:\Dokumente und Einstellungen\Leo> >> > > > -- /* Tom Quetchenbach * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 * Lab: (626) 395-8820 || Cell: (863) 370-6402 */ From: quetchen at caltech.edu (Tom Quetchenbach) Date: Fri, 03 Oct 2008 14:32:27 -0700 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48E3D7EA.8030703@caltech.edu> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> <48E3D7EA.8030703@caltech.edu> Message-ID: <48E68F6B.6080203@caltech.edu> I tried some experiments with my dial-up connection yesterday. I had to download several files at once to reach what seemed to be close to a maximum delay.
Here is a summary of my results: Uncongested: min: 140ms, max: 171ms, mean: 154ms Congested (six large background flows from kernel.org servers): min: 3936ms, max: 6780ms, mean: 5407ms So, max(congested) - min(uncongested) = 6640 ms My modem reported a connection speed of 54.6 Kbit/s, so 6.640 s * 54.6 Kbit/s / 8 ≈ 45 Kbyte 45 Kbyte / 1.5 Kbyte/packet = 30 packets This was around 10-11 AM PST on 2008/10/03, using Windows XP Professional (service pack 3). The background traffic was between two and six large files from ftp://ftp2.kernel.org, http://kernel.org, and http://mirrors.kernel.org. The ISP was AT&T in Pasadena, CA. Here are my raw data: http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping-output/ And, in the interest of putting off real work, here is a rough plot of ping RTT vs. time: http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping_rtt.png Would it be worth re-running the test with smaller packets, to see if the queue size in this case is specified in bytes or packets? I think I can convince Windows to change its MTU. I was also planning on testing with hping2 (which uses TCP SYN packets instead of ICMP echoes) and comparing the results. I'll do this sometime this weekend or Monday. -Tom Tom Quetchenbach wrote: > My ISP gives me dial-up access as a backup to my DSL, so I'll try to > play around with it at some point. > > -Tom > > Lachlan Andrew wrote: >> Thanks Stefan. >> >> Those numbers are interesting. I'm surprised that there was 8s delay >> when congested. I'm wondering if ping packets are treated >> differently. (Many systems give ICMP packets lower priority.) Still, >> 35 packets sounds a reasonable buffer size. >> >> Does anyone else on the list have any data to support or contradict >> this? My parents-in-law use dial-up, so I'll try to check their >> connection soon.
>> >> Cheers, >> Lachlan >> >> 2008/10/2 Stefan Hirschmann : >>> Greeting Andrew and all other readers, >>> >>>> Lachlan Andrew wrote: >>>>> 2008/7/30 Stefan Hirschmann : >>>> Greetings Stefan, >>>> >>>> Thanks for your interest in the test suite. I apologise for the long >>>> delay in getting back to you. >>> I apologize for this long delay too. But it was not easy to find anyone >>> with a 56K POTS modem still in use. >>> >>> >>>>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" there is the section: >>>>> "3.2. Delay/throughput tradeoff as function of queue size" >>>>> describing the buffer sizes of the routers, but only for the access link scenario. >>>>> >>>>> I wanted to extend the values to the other scenarios and noticed a problem with it. >>>>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 KByte -> 0.8 / 1.5 = 0,53 packets. >>>>> >>>>> So even if I use the BDP the value is much too small. A rounding to one is IMHO also not realistic. What value should be used as a minimum buffer size and why? >>>> The Dial-Up scenario is there partly for POTS modems, and partly for >>>> GPRS. You should find out the buffer size used by either one of those >>>> (and then it would be great to post it to the list!). >>>> >>>> If you have access to a dial-up connection, you could try to measure >>>> the buffer size: Ping the next-hop node with an idle link, and then >>>> while downloading something large. The difference in RTTs will give a >>>> good estimate of the buffer size. >>> OK I have done it. The test were made: >>> DATE: 2008/10/01 around 19:30 >>> Used 56K POTS Provider: Tele2 Austria >>> Operating System: Windows XP Media Centre Edition >>> Large background traffic: A linux kernel image from ftp://ftp2.kernel.org >>> >>> The exact testprotocol is at the end of the email. 
>>> The most important datas are: >>> uncongested: >>> Minimum = 134ms, Maximum = 148ms, Mean = 141ms >>> >>> congested: >>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>> >>> The correct formula should be: >>> max(queuing time) = max(congested) - min(uncongested) >>> 8407 ms = 8541 ms - 134 ms >>> >>> 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the real >>> usable value. In this case: >>> time * bandwidth = amount of data >>> 8,541 s * 6 KByte/s = 51,246 KByte >>> >>> If you say, that the packetsize is 1,5 KByte than: >>> 51,246 KByte / 1.5 KByte = 34,164 >>> >>> So 35 is the Queuesize in packets. >>> >>> >>> >>> Cheers Stefan >>> >>> >>> >>> Now the complete console log (was a German Windows vesion): >>> =============================================================================================== >>> Microsoft Windows XP [Version 5.1.2600] >>> (C) Copyright 1985-2001 Microsoft Corp. >>> >>> C:\Dokumente und Einstellungen\Leo>ping www.google.at >>> >>> Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: >>> >>> Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 >>> Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 >>> Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 >>> Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 >>> >>> Ping-Statistik f?r 209.85.129.147: >>> Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), >>> Ca. Zeitangaben in Millisek.: >>> Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms >>> >>> >>> C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at >>> >>> Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: >>> >>> Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 >>> Zeit?berschreitung der Anforderung. >>> Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 >>> Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 >>> >>> Ping-Statistik f?r 209.85.129.104: >>> Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), >>> Ca. 
Zeitangaben in Millisek.: >>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>> >>> C:\Dokumente und Einstellungen\Leo> >>> >> >> > -- /* Tom Quetchenbach * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 * Lab: (626) 395-8820 || Cell: (863) 370-6402 */ From: lars.eggert at nokia.com (Lars Eggert) Date: Sat, 4 Oct 2008 10:03:58 +0300 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48E68F6B.6080203@caltech.edu> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> <48E3D7EA.8030703@caltech.edu> <48E68F6B.6080203@caltech.edu> Message-ID: Hi, in case you scripted these tests, can you share that script? I'd be interested to generate some data for GSM, EDGE, 3G and 3.5G networks to share with the RG. Thanks, Lars On 2008-10-4, at 0:32, ext Tom Quetchenbach wrote: > I tried some experiments with my dial-up connection yesterday. I had > to > download several files at once to reach what seemed to be close to a > maximum delay. > > Here is a summary of my results: > > Uncongested: > min: 140ms, max: 171ms, mean: 154ms > > Congested (six large background flows from kernel.org servers): > min: 3936ms, max: 6780ms, mean: 5407ms > > So, max(congested) - min(uncongested) = 6640 ms > > My modem reported a connection speed of 54.6 Kbit/s, so > > 6.650 s * 54.6 Kbit/s / 8 = 45 Kbyte/s > 45 Kbyte/s / 1.5 Kbyte/packet = 30 packets > > This was around 10-11 AM PST on 2008/10/03, using Windows XP > Professional (service pack 3). The background traffic was between two > and six large files from ftp://ftp2.kernel.org, http://kernel.org, and > http://mirrors.kernel.org. The ISP was AT&T in Pasadena, CA. > > Here are my raw data: > > http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping-output/ > > And, in the interest of putting off real work, here is a rough plot of > ping RTT vs. 
time: > > http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping_rtt.png > > Would it be worth re-running the test with smaller packets, to see if > the queue size in this case is specified in bytes or packets? I > think I > can convince Windows to change its MTU. I was also planning on testing > with hping2 (which uses TCP SYN packets instead of ICMP echos) and > comparing the results. I'll do this sometime this weekend or Monday. > > -Tom > > Tom Quetchenbach wrote: >> My ISP gives me dial-up access as a backup to my DSL, so I'll try to >> play around with it at some point. >> >> -Tom >> >> Lachlan Andrew wrote: >>> Thanks Stefan. >>> >>> Those numbers are interesting. I'm surprised that there was 8s >>> delay >>> when congested. I'm wondering if ping packets are treated >>> differently. (Many systems give ICMP packets lower priority.) >>> Still, >>> 35 packets sounds a reasonable buffer size. >>> >>> Does anyone else on the list have any data to support or contradict >>> this? My parents-in-law use dial-up, so I'll try to check their >>> connection soon. >>> >>> Cheers, >>> Lachlan >>> >>> 2008/10/2 Stefan Hirschmann : >>>> Greeting Andrew and all other readers, >>>> >>>>> Lachlan Andrew wrote: >>>>>> 2008/7/30 Stefan Hirschmann : >>>>> Greetings Stefan, >>>>> >>>>> Thanks for your interest in the test suite. I apologise for the >>>>> long >>>>> delay in getting back to you. >>>> I apologize for this long delay too. But it was not easy to find >>>> anyone >>>> with a 56K POTS modem still in use. >>>> >>>> >>>>>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" >>>>>> there is the section: >>>>>> "3.2. Delay/throughput tradeoff as function of queue size" >>>>>> describing the buffer sizes of the routers, but only for the >>>>>> access link scenario. >>>>>> >>>>>> I wanted to extend the values to the other scenarios and >>>>>> noticed a problem with it. 
>>>>>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = >>>>>> 0.8 KByte -> 0.8 / 1.5 = 0,53 packets. >>>>>> >>>>>> So even if I use the BDP the value is much too small. A >>>>>> rounding to one is IMHO also not realistic. What value should >>>>>> be used as a minimum buffer size and why? >>>>> The Dial-Up scenario is there partly for POTS modems, and partly >>>>> for >>>>> GPRS. You should find out the buffer size used by either one of >>>>> those >>>>> (and then it would be great to post it to the list!). >>>>> >>>>> If you have access to a dial-up connection, you could try to >>>>> measure >>>>> the buffer size: Ping the next-hop node with an idle link, and >>>>> then >>>>> while downloading something large. The difference in RTTs will >>>>> give a >>>>> good estimate of the buffer size. >>>> OK I have done it. The test were made: >>>> DATE: 2008/10/01 around 19:30 >>>> Used 56K POTS Provider: Tele2 Austria >>>> Operating System: Windows XP Media Centre Edition >>>> Large background traffic: A linux kernel image from ftp://ftp2.kernel.org >>>> >>>> The exact testprotocol is at the end of the email. >>>> The most important datas are: >>>> uncongested: >>>> Minimum = 134ms, Maximum = 148ms, Mean = 141ms >>>> >>>> congested: >>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>> >>>> The correct formula should be: >>>> max(queuing time) = max(congested) - min(uncongested) >>>> 8407 ms = 8541 ms - 134 ms >>>> >>>> 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the real >>>> usable value. In this case: >>>> time * bandwidth = amount of data >>>> 8,541 s * 6 KByte/s = 51,246 KByte >>>> >>>> If you say, that the packetsize is 1,5 KByte than: >>>> 51,246 KByte / 1.5 KByte = 34,164 >>>> >>>> So 35 is the Queuesize in packets. 
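[Editorial note: the quoted estimate above reduces to a few lines of arithmetic. The sketch below is illustrative only, not part of the original thread; the 6 KByte/s usable rate and 1.5 KByte packet size are the assumptions Stefan states in his email.]

```python
import math

# Measured ping RTTs, in seconds (from the console log in the thread)
max_congested = 8.541      # worst RTT while the modem link was saturated
min_uncongested = 0.134    # best RTT on an idle link

# Assumed figures from the email
usable_rate = 6.0          # KByte/s, realistic usable rate of a 56 Kbit/s modem
packet_size = 1.5          # KByte per packet

max_queuing_time = max_congested - min_uncongested    # 8.407 s
queued_data = max_congested * usable_rate             # 51.246 KByte
queue_packets = math.ceil(queued_data / packet_size)  # 34.164 rounded up

print(queue_packets)  # 35
```

Using the queuing delay of 8.407 s instead of the full worst-case RTT would give 34 packets rather than 35; the difference is well within the accuracy of the method.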
>>>> >>>> >>>> >>>> Cheers Stefan >>>> >>>> >>>> >>>> Now the complete console log (it was a German Windows version): >>>> =============================================================================================== >>>> Microsoft Windows XP [Version 5.1.2600] >>>> (C) Copyright 1985-2001 Microsoft Corp. >>>> >>>> C:\Dokumente und Einstellungen\Leo>ping www.google.at >>>> >>>> Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: >>>> >>>> Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 >>>> Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 >>>> Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 >>>> Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 >>>> >>>> Ping-Statistik für 209.85.129.147: >>>> Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), >>>> Ca. Zeitangaben in Millisek.: >>>> Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms >>>> >>>> >>>> C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at >>>> >>>> Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: >>>> >>>> Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 >>>> Zeitüberschreitung der Anforderung. >>>> Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 >>>> Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 >>>> >>>> Ping-Statistik für 209.85.129.104: >>>> Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), >>>> Ca.
Zeitangaben in Millisek.: >>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>> >>>> C:\Dokumente und Einstellungen\Leo> >>>> >>> >>> >> > > -- > /* Tom Quetchenbach > * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech > * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 > * Lab: (626) 395-8820 || Cell: (863) 370-6402 > */ > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1611 bytes Desc: not available Url : http://mailman.ICSI.Berkeley.EDU/pipermail/tmrg-interest/attachments/20081004/357505d4/attachment.bin From: quetchen at caltech.edu (Tom Quetchenbach) Date: Mon, 06 Oct 2008 22:41:58 -0700 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> <48E3D7EA.8030703@caltech.edu> <48E68F6B.6080203@caltech.edu> Message-ID: <48EAF6A6.40504@caltech.edu> No, I didn't write any scripts; I just did it by hand. I did write a little tiny python script (~10 lines) for generating data that can be fed to gnuplot from the Windows (XP) version of ping. That's how I made the plots. If you want it, it's here: http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/plotping.py -Tom Lars Eggert wrote: > Hi, > > in case you scripted these tests, can you share that script? I'd be > interested to generate some data for GSM, EDGE, 3G and 3.5G networks to > share with the RG. > > Thanks, > Lars > > > On 2008-10-4, at 0:32, ext Tom Quetchenbach wrote: > >> I tried some experiments with my dial-up connection yesterday. I had to >> download several files at once to reach what seemed to be close to a >> maximum delay. 
>> >> Here is a summary of my results: >> >> Uncongested: >> min: 140ms, max: 171ms, mean: 154ms >> >> Congested (six large background flows from kernel.org servers): >> min: 3936ms, max: 6780ms, mean: 5407ms >> >> So, max(congested) - min(uncongested) = 6640 ms >> >> My modem reported a connection speed of 54.6 Kbit/s, so >> >> 6.650 s * 54.6 Kbit/s / 8 = 45 Kbyte/s >> 45 Kbyte/s / 1.5 Kbyte/packet = 30 packets >> >> This was around 10-11 AM PST on 2008/10/03, using Windows XP >> Professional (service pack 3). The background traffic was between two >> and six large files from ftp://ftp2.kernel.org, http://kernel.org, and >> http://mirrors.kernel.org. The ISP was AT&T in Pasadena, CA. >> >> Here are my raw data: >> >> http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping-output/ >> >> And, in the interest of putting off real work, here is a rough plot of >> ping RTT vs. time: >> >> http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping_rtt.png >> >> Would it be worth re-running the test with smaller packets, to see if >> the queue size in this case is specified in bytes or packets? I think I >> can convince Windows to change its MTU. I was also planning on testing >> with hping2 (which uses TCP SYN packets instead of ICMP echos) and >> comparing the results. I'll do this sometime this weekend or Monday. >> >> -Tom >> >> Tom Quetchenbach wrote: >>> My ISP gives me dial-up access as a backup to my DSL, so I'll try to >>> play around with it at some point. >>> >>> -Tom >>> >>> Lachlan Andrew wrote: >>>> Thanks Stefan. >>>> >>>> Those numbers are interesting. I'm surprised that there was 8s delay >>>> when congested. I'm wondering if ping packets are treated >>>> differently. (Many systems give ICMP packets lower priority.) Still, >>>> 35 packets sounds a reasonable buffer size. >>>> >>>> Does anyone else on the list have any data to support or contradict >>>> this? My parents-in-law use dial-up, so I'll try to check their >>>> connection soon. 
>>>> >>>> Cheers, >>>> Lachlan >>>> >>>> 2008/10/2 Stefan Hirschmann : >>>>> Greeting Andrew and all other readers, >>>>> >>>>>> Lachlan Andrew wrote: >>>>>>> 2008/7/30 Stefan Hirschmann : >>>>>> Greetings Stefan, >>>>>> >>>>>> Thanks for your interest in the test suite. I apologise for the long >>>>>> delay in getting back to you. >>>>> I apologize for this long delay too. But it was not easy to find >>>>> anyone >>>>> with a 56K POTS modem still in use. >>>>> >>>>> >>>>>>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" >>>>>>> there is the section: >>>>>>> "3.2. Delay/throughput tradeoff as function of queue size" >>>>>>> describing the buffer sizes of the routers, but only for the >>>>>>> access link scenario. >>>>>>> >>>>>>> I wanted to extend the values to the other scenarios and noticed >>>>>>> a problem with it. >>>>>>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 >>>>>>> KByte -> 0.8 / 1.5 = 0,53 packets. >>>>>>> >>>>>>> So even if I use the BDP the value is much too small. A rounding >>>>>>> to one is IMHO also not realistic. What value should be used as a >>>>>>> minimum buffer size and why? >>>>>> The Dial-Up scenario is there partly for POTS modems, and partly for >>>>>> GPRS. You should find out the buffer size used by either one of >>>>>> those >>>>>> (and then it would be great to post it to the list!). >>>>>> >>>>>> If you have access to a dial-up connection, you could try to measure >>>>>> the buffer size: Ping the next-hop node with an idle link, and then >>>>>> while downloading something large. The difference in RTTs will >>>>>> give a >>>>>> good estimate of the buffer size. >>>>> OK I have done it. The test were made: >>>>> DATE: 2008/10/01 around 19:30 >>>>> Used 56K POTS Provider: Tele2 Austria >>>>> Operating System: Windows XP Media Centre Edition >>>>> Large background traffic: A linux kernel image from >>>>> ftp://ftp2.kernel.org >>>>> >>>>> The exact testprotocol is at the end of the email. 
>>>>> The most important datas are: >>>>> uncongested: >>>>> Minimum = 134ms, Maximum = 148ms, Mean = 141ms >>>>> >>>>> congested: >>>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>>> >>>>> The correct formula should be: >>>>> max(queuing time) = max(congested) - min(uncongested) >>>>> 8407 ms = 8541 ms - 134 ms >>>>> >>>>> 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the real >>>>> usable value. In this case: >>>>> time * bandwidth = amount of data >>>>> 8,541 s * 6 KByte/s = 51,246 KByte >>>>> >>>>> If you say, that the packetsize is 1,5 KByte than: >>>>> 51,246 KByte / 1.5 KByte = 34,164 >>>>> >>>>> So 35 is the Queuesize in packets. >>>>> >>>>> >>>>> >>>>> Cheers Stefan >>>>> >>>>> >>>>> >>>>> Now the complete console log (was a German Windows vesion): >>>>> =============================================================================================== >>>>> >>>>> Microsoft Windows XP [Version 5.1.2600] >>>>> (C) Copyright 1985-2001 Microsoft Corp. >>>>> >>>>> C:\Dokumente und Einstellungen\Leo>ping www.google.at >>>>> >>>>> Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: >>>>> >>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 >>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 >>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 >>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 >>>>> >>>>> Ping-Statistik f?r 209.85.129.147: >>>>> Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), >>>>> Ca. Zeitangaben in Millisek.: >>>>> Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms >>>>> >>>>> >>>>> C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at >>>>> >>>>> Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: >>>>> >>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 >>>>> Zeit?berschreitung der Anforderung. 
>>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 >>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 >>>>> >>>>> Ping-Statistik f?r 209.85.129.104: >>>>> Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), >>>>> Ca. Zeitangaben in Millisek.: >>>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>>> >>>>> C:\Dokumente und Einstellungen\Leo> >>>>> >>>> >>>> >>> >> >> -- >> /* Tom Quetchenbach >> * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech >> * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 >> * Lab: (626) 395-8820 || Cell: (863) 370-6402 >> */ >> _______________________________________________ >> Tmrg-interest mailing list >> Tmrg-interest at ICSI.Berkeley.EDU >> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > From: quetchen at caltech.edu (Tom Quetchenbach) Date: Thu, 09 Oct 2008 15:29:39 -0700 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48EAF6A6.40504@caltech.edu> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> <48E3D7EA.8030703@caltech.edu> <48E68F6B.6080203@caltech.edu> <48EAF6A6.40504@caltech.edu> Message-ID: <48EE85D3.5060404@caltech.edu> I looked into this a little bit more yesterday. I did a Wireshark packet capture at the same time as the ping, and got very different results for the TCP and ping packets. Here is a plot of the ping RTT (as before, the x-axis is really only an estimate): http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/2008-10-08/ping_rtt.png And here is what Wireshark gives me when I ask it for an RTT graph: http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/2008-10-08/wireshark_rtt.png (I started six flows, about a minute apart, by hand. The Wireshark plot is for the longest-lived flow. The ping data starts about 15 seconds before the first flow.) So, perhaps the ping results are not quite to be trusted. Has anybody else done any investigation of this? 
Especially interesting would be if you could use Linux's tcp_probe module to plot the TCP RTT. Unfortunately my modem only works in Windows, but I may be able to rig something up using another computer as a gateway and get some measurements this way. -Tom Tom Quetchenbach wrote: > No, I didn't write any scripts; I just did it by hand. > > I did write a little tiny python script (~10 lines) for generating data > that can be fed to gnuplot from the Windows (XP) version of ping. That's > how I made the plots. If you want it, it's here: > > http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/plotping.py > > -Tom > > Lars Eggert wrote: >> Hi, >> >> in case you scripted these tests, can you share that script? I'd be >> interested to generate some data for GSM, EDGE, 3G and 3.5G networks to >> share with the RG. >> >> Thanks, >> Lars >> >> >> On 2008-10-4, at 0:32, ext Tom Quetchenbach wrote: >> >>> I tried some experiments with my dial-up connection yesterday. I had to >>> download several files at once to reach what seemed to be close to a >>> maximum delay. >>> >>> Here is a summary of my results: >>> >>> Uncongested: >>> min: 140ms, max: 171ms, mean: 154ms >>> >>> Congested (six large background flows from kernel.org servers): >>> min: 3936ms, max: 6780ms, mean: 5407ms >>> >>> So, max(congested) - min(uncongested) = 6640 ms >>> >>> My modem reported a connection speed of 54.6 Kbit/s, so >>> >>> 6.650 s * 54.6 Kbit/s / 8 = 45 Kbyte/s >>> 45 Kbyte/s / 1.5 Kbyte/packet = 30 packets >>> >>> This was around 10-11 AM PST on 2008/10/03, using Windows XP >>> Professional (service pack 3). The background traffic was between two >>> and six large files from ftp://ftp2.kernel.org, http://kernel.org, and >>> http://mirrors.kernel.org. The ISP was AT&T in Pasadena, CA. >>> >>> Here are my raw data: >>> >>> http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping-output/ >>> >>> And, in the interest of putting off real work, here is a rough plot of >>> ping RTT vs. 
time: >>> >>> http://wil-ns.cs.caltech.edu/~quetchen/dialup-tests/ping_rtt.png >>> >>> Would it be worth re-running the test with smaller packets, to see if >>> the queue size in this case is specified in bytes or packets? I think I >>> can convince Windows to change its MTU. I was also planning on testing >>> with hping2 (which uses TCP SYN packets instead of ICMP echos) and >>> comparing the results. I'll do this sometime this weekend or Monday. >>> >>> -Tom >>> >>> Tom Quetchenbach wrote: >>>> My ISP gives me dial-up access as a backup to my DSL, so I'll try to >>>> play around with it at some point. >>>> >>>> -Tom >>>> >>>> Lachlan Andrew wrote: >>>>> Thanks Stefan. >>>>> >>>>> Those numbers are interesting. I'm surprised that there was 8s delay >>>>> when congested. I'm wondering if ping packets are treated >>>>> differently. (Many systems give ICMP packets lower priority.) Still, >>>>> 35 packets sounds a reasonable buffer size. >>>>> >>>>> Does anyone else on the list have any data to support or contradict >>>>> this? My parents-in-law use dial-up, so I'll try to check their >>>>> connection soon. >>>>> >>>>> Cheers, >>>>> Lachlan >>>>> >>>>> 2008/10/2 Stefan Hirschmann : >>>>>> Greeting Andrew and all other readers, >>>>>> >>>>>>> Lachlan Andrew wrote: >>>>>>>> 2008/7/30 Stefan Hirschmann : >>>>>>> Greetings Stefan, >>>>>>> >>>>>>> Thanks for your interest in the test suite. I apologise for the long >>>>>>> delay in getting back to you. >>>>>> I apologize for this long delay too. But it was not easy to find >>>>>> anyone >>>>>> with a 56K POTS modem still in use. >>>>>> >>>>>> >>>>>>>> In the "Common TCP Evaluation Suite draft-irtf-tmrg-tests-00" >>>>>>>> there is the section: >>>>>>>> "3.2. Delay/throughput tradeoff as function of queue size" >>>>>>>> describing the buffer sizes of the routers, but only for the >>>>>>>> access link scenario. >>>>>>>> >>>>>>>> I wanted to extend the values to the other scenarios and noticed >>>>>>>> a problem with it. 
>>>>>>>> The BDP of the Dial-Up Link scenario is 64Kbps * 0.1 s / 8 = 0.8 >>>>>>>> KByte -> 0.8 / 1.5 = 0,53 packets. >>>>>>>> >>>>>>>> So even if I use the BDP the value is much too small. A rounding >>>>>>>> to one is IMHO also not realistic. What value should be used as a >>>>>>>> minimum buffer size and why? >>>>>>> The Dial-Up scenario is there partly for POTS modems, and partly for >>>>>>> GPRS. You should find out the buffer size used by either one of >>>>>>> those >>>>>>> (and then it would be great to post it to the list!). >>>>>>> >>>>>>> If you have access to a dial-up connection, you could try to measure >>>>>>> the buffer size: Ping the next-hop node with an idle link, and then >>>>>>> while downloading something large. The difference in RTTs will >>>>>>> give a >>>>>>> good estimate of the buffer size. >>>>>> OK I have done it. The test were made: >>>>>> DATE: 2008/10/01 around 19:30 >>>>>> Used 56K POTS Provider: Tele2 Austria >>>>>> Operating System: Windows XP Media Centre Edition >>>>>> Large background traffic: A linux kernel image from >>>>>> ftp://ftp2.kernel.org >>>>>> >>>>>> The exact testprotocol is at the end of the email. >>>>>> The most important datas are: >>>>>> uncongested: >>>>>> Minimum = 134ms, Maximum = 148ms, Mean = 141ms >>>>>> >>>>>> congested: >>>>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>>>> >>>>>> The correct formula should be: >>>>>> max(queuing time) = max(congested) - min(uncongested) >>>>>> 8407 ms = 8541 ms - 134 ms >>>>>> >>>>>> 56 KBit/s is 7 KByte/s. 6 KByte/s is a realistic value for the real >>>>>> usable value. In this case: >>>>>> time * bandwidth = amount of data >>>>>> 8,541 s * 6 KByte/s = 51,246 KByte >>>>>> >>>>>> If you say, that the packetsize is 1,5 KByte than: >>>>>> 51,246 KByte / 1.5 KByte = 34,164 >>>>>> >>>>>> So 35 is the Queuesize in packets. 
>>>>>> >>>>>> >>>>>> >>>>>> Cheers Stefan >>>>>> >>>>>> >>>>>> >>>>>> Now the complete console log (it was a German Windows version): >>>>>> =============================================================================================== >>>>>> >>>>>> Microsoft Windows XP [Version 5.1.2600] >>>>>> (C) Copyright 1985-2001 Microsoft Corp. >>>>>> >>>>>> C:\Dokumente und Einstellungen\Leo>ping www.google.at >>>>>> >>>>>> Ping www.l.google.com [209.85.129.147] mit 32 Bytes Daten: >>>>>> >>>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=148ms TTL=244 >>>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=146ms TTL=244 >>>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=136ms TTL=244 >>>>>> Antwort von 209.85.129.147: Bytes=32 Zeit=134ms TTL=244 >>>>>> >>>>>> Ping-Statistik für 209.85.129.147: >>>>>> Pakete: Gesendet = 4, Empfangen = 4, Verloren = 0 (0% Verlust), >>>>>> Ca. Zeitangaben in Millisek.: >>>>>> Minimum = 134ms, Maximum = 148ms, Mittelwert = 141ms >>>>>> >>>>>> >>>>>> C:\Dokumente und Einstellungen\Leo>ping -w 9999 www.google.at >>>>>> >>>>>> Ping www.l.google.com [209.85.129.104] mit 32 Bytes Daten: >>>>>> >>>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=7027ms TTL=244 >>>>>> Zeitüberschreitung der Anforderung. >>>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=8541ms TTL=244 >>>>>> Antwort von 209.85.129.104: Bytes=32 Zeit=5963ms TTL=244 >>>>>> >>>>>> Ping-Statistik für 209.85.129.104: >>>>>> Pakete: Gesendet = 4, Empfangen = 3, Verloren = 1 (25% Verlust), >>>>>> Ca.
Zeitangaben in Millisek.: >>>>>> Minimum = 5963ms, Maximum = 8541ms, Mittelwert = 7177ms >>>>>> >>>>>> C:\Dokumente und Einstellungen\Leo> >>>>>> >>>>> >>> -- >>> /* Tom Quetchenbach >>> * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech >>> * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 >>> * Lab: (626) 395-8820 || Cell: (863) 370-6402 >>> */ >>> _______________________________________________ >>> Tmrg-interest mailing list >>> Tmrg-interest at ICSI.Berkeley.EDU >>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest > _______________________________________________ > Tmrg-interest mailing list > Tmrg-interest at ICSI.Berkeley.EDU > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/tmrg-interest -- /* Tom Quetchenbach * WAN-in-Lab / Netlab, Dept of Computer Science, Caltech * 1200 E California Blvd, MC 256-80, Pasadena CA 91125 * Lab: (626) 395-8820 || Cell: (863) 370-6402 */ From: garmitage at swin.edu.au (grenville armitage) Date: Fri, 10 Oct 2008 13:58:36 +1100 Subject: [Tmrg] Queue size - Towards a Common TCP Evaluation Suite In-Reply-To: <48EE85D3.5060404@caltech.edu> References: <20080730103251.299310@gmx.net> <48E3C1F5.40906@gmx.at> <48E3D7EA.8030703@caltech.edu> <48E68F6B.6080203@caltech.edu> <48EAF6A6.40504@caltech.edu> <48EE85D3.5060404@caltech.edu> Message-ID: <48EEC4DC.7080908@swin.edu.au> Tom Quetchenbach wrote: [..] > Has anybody else done any investigation of this? Especially interesting > would be if you could use Linux's tcp_probe module to plot the TCP RTT. > Unfortunately my modem only works in Windows, but I may be able to rig > something up using another computer as a gateway and get some > measurements this way. You might find http://caia.swin.edu.au/tools/spp/ of interest. A small tool we put together to passively estimate RTT between two points on the network based on arbitrary flows of packets seen heading in each direction. (It also works with flows that are asymmetric, i.e. 
more packets/sec in one direction than the other.) Basically you capture packets at either end of your link, pass both files to SPP and out pops a sequence of RTT estimates versus time. By filtering out different application flows from your captured traffic you can isolate the RTT vs time experienced by different flows passing the same points of the network. Right now the tool's only been compiled under FreeBSD. But it reads standard pcap files so you can do the capture using whatever hosts happen to be at the points of interest in your network. cheers, gja From: lachlan.andrew at gmail.com (Lachlan Andrew) Date: Wed, 15 Oct 2008 12:54:09 +1100 Subject: [Tmrg] tmix-linux: burst of FIN packets causes packet loss at end of experiment In-Reply-To: <48F5291C.8020507@swin.edu.au> References: <48F5291C.8020507@swin.edu.au> Message-ID: 2008/10/15 Ritesh Kumar : > > On Tue, Oct 14, 2008 at 3:40 PM, Tom Quetchenbach > wrote: >> >> When a tmix experiment ends, the stop_generator() function closes >> all of the currently-active connections. This produces a large >> burst of FIN packets, which can result in a burst of packet losses. >> (This is especially true if using routers that specify their buffer >> sizes in packets.) In addition to being a somewhat unrealistic >> situation, this results in inaccurate statistics of total packet >> loss. > > Linux timers are not microsecond accurate. You might want to check > the actual time a usleep(200) sleeps. It might not be a bad idea to > sleep for larger periods of time after a given small number of > connections terminate... Good suggestion. A simpler alternative may be just to usleep(1000). > I would actually recommend skipping a portion of all your experimental > data at the beginning and the end of the experiment. Most of our > scripts have the > following assumption: The experiment lasts 4800 seconds. We skip the > beginning and ending 1200 seconds of experimental data.
Hence it would > be worthwhile to sample the interface for packet losses at 1200 seconds and 3600 seconds to get a reliable set of results. This > also eliminates the possibility that the sleep between close() calls is > not nearly enough to prevent packet losses in some oddly configured > scenarios. Unfortunately, some of our statistics come from SNMP counters on routers, which are only updated every 5 seconds or so, and so we have to wait about 10 seconds after all traffic has finished before we can get reliable values. There is a fundamental statistical need to ignore the first little part, so that the number of connections can reach "steady state", but I don't know any fundamental reason to ignore the last 1/3 of the experiment, provided that the traffic generator ends flows cleanly. Since our suite is very time-constrained, we want to cut out any unnecessary waiting. I'm Cc'ing this to TMRG in case someone on the list knows of a strong reason to ignore the end of an experiment (or knows a way around the SNMP problem). Cheers, Lachlan -- Lachlan Andrew Centre for Advanced Internet Architectures (CAIA) Swinburne University of Technology, Melbourne, Australia Ph +613 9214 4837 http://netlab.caltech.edu/lachlan From: lstewart at room52.net (Lawrence Stewart) Date: Thu, 16 Oct 2008 16:18:48 +1100 Subject: [Tmrg] tmix-linux: burst of FIN packets causes packet loss at end of experiment In-Reply-To: References: <48F5291C.8020507@swin.edu.au> Message-ID: <48F6CEB8.2010604@room52.net> Hi Lachlan and all, Lachlan Andrew wrote: [snip] > > There is a fundamental statistical need to ignore the first little part, > so that the number of connections can reach "steady state", but I don't > know any fundamental reason to ignore the last 1/3 of the > experiment, provided that the traffic generator ends flows cleanly. Since > our suite is very time-constrained, we want to cut out any unnecessary > waiting.
> > I'm Cc'ing this to TMRG in case someone on the list knows of a strong > reason to ignore the end of an experiment (or knows a way around the > SNMP problem). I certainly couldn't say that ignoring the last 1/3rd of an experiment is necessary in my experience. However, I have observed some behaviour with the FreeBSD TCP implementation which might be relevant to other TCPs as well, and which might be pertinent to this discussion. Increasing the tx socket buffer size at the sender can lead to a situation where, at the end of the connection, the userland process closes the socket, but the kernel finds itself with a large buffer of data still needing to be sent. Going on memory here, I recall observing that sometimes (I haven't taken the time to narrow down when/why etc.) some of the TCP variables can apparently get messed up, e.g. cwnd can take on some unexpected values while the buffer is being flushed. I can't be more specific than that right now, but I hope to sit down and nut it out at some point. Stepping back further, one might reasonably ask why you'd need to increase the tx socket buffer to a size where this problem is possible... I noticed by trial and error that when trying to use a non-real-time OS to do traffic generation, sometimes vagaries in kernel scheduling meant that you could end up with an empty tx socket buffer for periods of time during transmission if you didn't have the buffer sized a substantial amount larger than the BDP of the path. I've worked around the issue by chopping the end off files after the time at which the traffic generation process (iperf in my case) closes the socket (normally a few seconds at most, depending on the test parameters). Definitely not ideal, I know, but it works around the issue and I thought it was a story from the coal face worth sharing.
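Lawrence's workaround of over-sizing the send buffer relative to the path BDP can be sketched as follows. The bandwidth and RTT figures here are hypothetical, and the kernel may round or clamp the requested size, so this is an illustration of the sizing rule rather than his exact setup.

```python
import socket

# Hypothetical path: 10 Mbit/s bottleneck, 200 ms RTT.
bandwidth_bps = 10_000_000
rtt_s = 0.200
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)   # 250,000 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a send buffer well above the BDP so that scheduling hiccups
# on a non-real-time OS don't leave the buffer empty mid-transfer.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * bdp_bytes)
# Read the size back: the kernel may not grant exactly what was asked for.
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
sock.close()
print(bdp_bytes, actual)
```

On Linux, for example, getsockopt typically reports double the requested value and the grant is capped by net.core.wmem_max, which is why reading the size back matters before trusting that the buffer really exceeds the BDP.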
Cheers, Lawrence From: ritesh at cs.unc.edu (Ritesh Kumar) Date: Thu, 16 Oct 2008 20:08:05 -0400 Subject: [Tmrg] tmix-linux: burst of FIN packets causes packet loss at end of experiment In-Reply-To: <48F6CEB8.2010604@room52.net> References: <48F5291C.8020507@swin.edu.au> <48F6CEB8.2010604@room52.net> Message-ID: On Thu, Oct 16, 2008 at 1:18 AM, Lawrence Stewart wrote: > Hi Lachlan and all, > > Lachlan Andrew wrote: > > [snip] > > > > > There is a fundamental statistical need to ignore the first little part, > > so that the number of connections can reach "steady state", but I don't > > know any fundamental reason to ignore the last 1/3 of the > > experiment, provided that the traffic generator ends flows cleanly. Since > > our suite is very time-constrained, we want to cut out any unnecessary > > waiting. > > > I think it's a good idea to cut down as much wait time as possible. The reason we choose to ignore the last 1/3rd of the experiment is that Tmix logs full results (response times etc.) only for the connections that complete. So near the end, we have a lot of connections that don't finish (and we subsequently don't log results for them), though the traffic dynamics definitely get impacted by them. Hence, if you look at the connection arrival/departure process from the tmix results (which you can create using connection start times and the durations), then you will notice a ramp up _and_ a ramp down of connections. I understand that in many scenarios one may not need to follow this recommendation. However, I would instead recommend not stopping tmix at a given time but stopping it only when all connections are done. I think not giving the -d