From rem-conf-request@es.net Thu Apr  1 01:13:43 1993
From: bill@wizard.gsfc.nasa.gov (Bill Fink)
Subject: Re: Multicast experiences from a far-off corner
To: rem-conf@es.net
Date: Thu, 1 Apr 1993 03:20:12 -0500 (EST)
X-Mailer: ELM [version 2.4 PL17]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 733
Status: RO
X-Lines: 16

This was my experience today from NASA Goddard.  I had an encapsulated
tunnel directly from my system, wizard.gsfc.nasa.gov, to norad.arc.nasa.gov
via the NSI.

During the live IETF meeting, sound quality wasn't the greatest and
there were frequent dropouts of audio and video when routing information
was apparently lost.  However, the Internet Talk Radio broadcast was
clear and stable, as is the replay of the IETF meeting tonight (the
latter presumably because of lower overall network traffic).

All of the above was before a very recent change of my tunnel connection
which is now to the system mbone.nsi.nasa.gov (and happily the audio
and video feed of the IETF replay remain clear and stable).

						-Bill

From rem-conf-request@es.net Thu Apr  1 06:56:27 1993
Posted-Date: Thu 1 Apr 93 06:31:46-PST
Date: Thu 1 Apr 93 06:31:46-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Test of IVS 3.0 video
To: rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 1315
Status: RO
X-Lines: 35

Folks,
	During the first working group session today, we have only
one program on the schedule.  By request, we will be sending a second
video stream of the same program using the new version 3.0 of IVS.
This will be at a lower BW than nv, and there will be only one audio
channel, so network load will be less than yesterday.  We may continue
with the IVS video during the first afternoon session, or there may be
a different video demo including color.  Another message will be sent
if so.

Important:  This is IVS 3.0, not compatible with 2.0.  See below.

							-- Steve

> To: Stephen Casner <CASNER@ISI.EDU>
> Subject: Re: Multicast experiences from a far-off corner 
> Date: Thu, 01 Apr 93 09:56:30 +0200
> From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
> 
> Thanks a lot.  Could you please emphasize using the new version 3.0 of IVS,
> since this version is incompatible with previous versions.
> 
> Binaries & sources are available from avahi.inria.fr in the directory
> /pub/videoconference:
> 
> -rw-rw-r--  1 turletti  2713441 Mar 30 14:58 ivs-sun4-vfc.tar.Z
> -rw-rw-r--  1 turletti  3955929 Mar 26 11:56 ivs-dec5000.tar.Z
> -rw-rw-r--  1 turletti  1845722 Mar 29 10:01 ivs-sgi.tar.Z
> -rw-rw-r--  1 turletti   325903 Mar 30 14:59 ivs-src.tar.Z
> 
> Thanks again,
> 
> Thierry
> -------
-------

From rem-conf-request@es.net Thu Apr  1 10:49:49 1993
Date: Thu, 1 Apr 93 13:20:48 EST
From: Paul Milazzo <milazzo@diamond.bbn.com>
Subject: Color PictureWindow transmission from IETF afternoon session
To: rem-conf@es.net
Content-Length: 596
Status: RO
X-Lines: 17

We have started transmitting color video from the MOSPF session using
BBN's PictureWindow desktop videoconference tool.  To receive this
transmission, you can FTP pub/pwrx-1.3a-ietf.tar.Z from diamond.bbn.com.

This tar file contains two binaries, pwctrl and pwrx, which you should
place somewhere on your path, and a set of additions to (or a
replacement for) your ~/.sd.tcl.

If you do not wish to use sd, you can start PicWin manually by typing:

   pwctrl -displaycolor -connect 224.1.128.1

Please let us know what you think!

					Thanks,
					Paul Milazzo
					BBN Systems and Technologies

From rem-conf-request@es.net Thu Apr  1 17:28:48 1993
Date: Thu, 1 Apr 93 20:04:58 -0500
From: Joe Ragland <jrr@concert.net>
To: rem-conf@es.net
Subject: Internet Talk TV
Content-Length: 9335
Status: RO
X-Lines: 177

Article 4619 of alt.internet.services:
From: Deng_Xiaopingpingping.BEIJING@Tibet.UUCP (temp using Dollie LLama acct)
Subject: Internet Talk Television is coming to a workstation near you!
Sender: news@ftp.foo.net (NeTnEwS)
Nntp-Posting-Host: ftp.foo.net
Reply-To: /dev/null
Organization: The 501st Channel, All Flames - All The Time - Television
Followup-To: my kremvax posting of 4/1/1986
Date: Wed, 31 Mar 1993 23:59:59 GMT
Distribution: world
Approved: no
Lines: 165

 The following article is reprinted without permission from ConCoctions.
 ConCoctions is published by the Poretni Company.  More information cannot be
 obtained from the electronic mail address elo@porenti.com.

		Internet Talk Television 
	Karl MyNameIsMud (karl@television.com)

	Internet Talk Television attempts to fuse these two trends of
 gossipy newsmagazine format shows with a desire to squander network
 bandwidth with abandon just because it is there to form a new type of
 publication: a news and information service about the Internet,
 distributed on the Internet.  Internet Talk Television is modeled on
 the Oprah and Geraldo talk shows and has a goal of providing in-depth
 technical information to the Internet community.  The service is made
 initially possible with the support of people unlike you.  Our goal is to
 provide a self-referential parody for the Internet community
 (please note the Date: header on this posting :-).

Head: Bane of the Internet

 The product of Internet Talk Television is either a Quicktime(tm)
 movie file or 54,000 GIF files per show that require a 50 MIPs or
 greater workstation capable of displaying 30 GIF files per second,
 poorly produced and unfortunately widely available on computer
 networks (and on public ftp archives where we have found directories
 that are writable by the anonymous ftp account and have a free Gig or
 so of disk space, we hide the GIF files in '...' directories).  To
 produce these files, we start with the raw data of any journalistic
 endeavor: we make things up.

 This raw information is then illustrated graphically using
 professional-quality equipment: primarily Mario Paint running on a
 Super Nintendo Entertainment System, as used by a 5-year-old.  The
 information is then brought back to our studio, and edited and mixed
 on a secondhand $179 Emerson 2-head VHS VCR in Super Long Play (SLP)
 mode.

 The "look and feel" we strive for is akin to "Inside Edition", "Hard
 Copy", "Now It Can Be Told" or other lowest common denominator
 programs that appeal to the general interest in sensationalistic
 sleaze, scandal and gossip.

 Our goal is to cover the stories that don't make it into the grocery
 tabloids for reasons of legal liability for libel, truthfulness and
 just plain good taste.  Instead of discussions of protocols, we want
 to present actual packet traces of protocols on actual networks along
 with captured passwords, SMTP dialogue showing interesting private
 email messages and in-depth interviews with convicted crackers on how
 to break DES, Kerberos, NFS, passwords, how to make a Cisco router
 go into conniption fit mode, how to create cyclic spanning tree graphs
 that loop via routing protocols, etc.

 Instead of COMDEX, we want to cover the underground Legion of Doom
 beer busts, the Phone Phreaks annual telethon, etc.

 Head: Town Adult Video Tape Rental Outlet to the Global Village

 The result of Internet Talk Television's journalistic activities is a
 series of video image files.  The native format we start with is the
 popular GIF format; we then envision releases in JPEG, MPEG,
 PostScript, Quicktime(tm) and X Window Dump File format.  At 30 frames
 per second times 60 seconds times 30 minutes, a half-hour program would
 thus consist of 54,000 GIF files.  If each GIF file is around 50k the
 entire program should use up only about 2,575 megabytes.  [I would
 start buying a bunch of 2 Gigabyte and greater SCSI drives right now]
 (By the way our advertisements will be primarily companies selling
 disk drives and other magnetic storage media devices - "You can
 archive Internet Talk Television onto our 3rd party Exabyte
 EXB-8500cs; holds 25GB compressed!" ).

 GIF files are initially spooled on FTP.FOO.NET, the central machines
 of the Alternative network.  Files are then moved over to various
 regional networks for further distribution.  For example, FOOnet, a
 commercial network provider for the Marianas Islands with service in 2
 countries, will act as the central spooling area for the Pacific
 Islands region.  The Guido Bros. trucking company will provide the
 same service for Brooklyn.

 The goal of uncoordinated distribution is to increase the load on key
 links of the network.  Transferring several-megabyte files over 56 kbps
 and 64 kbps links will help quickly provide VP Al Gore the political
 support he needs to make NREN a reality :-)

 Files thus move from the FTP.FOO.NET central spool area, to regional
 spools, to national and local networks.  We anticipate most of this
 transfer to be done using the X11 protocol, but some networks are
 discussing the use of Display PostScript(tm) (PostScript level 2),
 Apple Quicktime(tm) and MicroSoft Windows(tm).

 It is important to note that Internet Talk Television is the original
 copyright violator (point of illegal origination in legalese) and does
 not control the distribution.  Please make copies on videotape and
 send them to your friends.  Send a copy to Deng Xiaoping (Free
 Tibet!).  Shock your friends by transmitting frames via
 ObscurePhone(tm) (oops, I mean PicturePhone(tm)).  Make your own
 compressed HDTV 8mm tape - and in 5 years you will be able to view it
 on something.  Bring a VHS tape of the program with you to watch the
 next time you go to a sports bar with large screen projection tv (if
 you want to have several beer bottles cracked over your head.  "Hey!
 Put the game back on!").

 Head: Serial Crimes, Parallels to Television

 Once files have made their way to an individual's desktop (hopefully
 each individual will perform their own ftp to the one central
 overloaded FTP site and will waste network bandwidth as well as disk
 space by storing their own redundant versions of the files) it is up
 to the individual user to decide how to present data.  We hope to
 see an infinite variety of different ways of having our files played
 and only list a few of the more banal methods.

 The simplest method to view a .gif file on a Sparcstation is to type
 "xv filename" (alternately "xloadimage filename" may work on some
 systems).  If the file is placed on a Network File System (NFS) file
 system on some remote site's server, the user is simply going
 to have to break into that remote machine or hack SUNRPC packets to
 spoof the remote machine's NFS daemon to read the remote file system
 via NFS.  Once the user has obtained the file, the user copies the file
 into some other poor sucker's account (on the local machine) who left
 the permissions on their home directory wide open, so that the rather
 large file doesn't show up as part of the sneaky user's disk usage when
 the system administrator does a 'du' to try to find out where all of
 the disk space is rapidly disappearing to.
 
 More adventurous playing of files involves video scan convertors
 and unlicensed low-power VHF TV transmission (do-it-yourself so-
 called pirate television stations).  This involves connecting the
 output of a SparcStation to a scan convertor (or converting the RGB
 signal from a Mac or the VGA from a PC) to produce an NTSC
 composite signal that can be fed into a VCR using the RCA connector
 video input.  Then have the VCR send the AUX signal out via
 the RF adapter (commonly set to VHF channel 2, 3 or 4) and connect
 the RF output coax to a large and high VHF antenna mounted on a
 mast high above your house.  Several of your neighbors should be
 able to pick up your signal.  You might even want to try feeding
 the signal INTO your local cable system.  The addition of an RF signal
 amplifier (which you can make using parts from Radical Shlock for just
 a few $$$) can increase your signal strength (and range) considerably.
 Caveat: Kids, don't try this at home; the FCC hasn't a large sense
 of humor.

  Head: How to obtain Internet Talk Television

  The GIF files will be available on FTP.FOO.NET ( Internet numeric
  address 127.0.0.1 ) beginning April 1, 1993 in the anonymous ftp
  subdirectory pub/television/.  Filenames begin with the frame
  number, followed by the date, followed by the extension .gif.
  Please be sure to turn on 'binary' transfer mode inside FTP.
  The GIF files holding the individual frames go from 00000 to 54000:

ftp> dir
200 PORT command successful.
150 Opening ASCII mode data connection for /bin/ls.
total 521825218252182
-rw-r--r--  1 foo      bar         50000 Apr  1 03:41 00000.040193.gif
-rw-r--r--  1 foo      bar         50000 Apr  1 03:41 00001.040193.gif
-rw-r--r--  1 foo      bar         50000 Apr  1 03:43 00002.040193.gif
			...
-rw-r--r--  1 foo      bar         50000 Apr  1 03:41 53998.040193.gif
-rw-r--r--  1 foo      bar         50000 Apr  1 03:41 53999.040193.gif
-rw-r--r--  1 foo      bar         50000 Apr  1 03:43 54000.040193.gif
226 Transfer complete.
521825218252182 bytes received in 0.5 seconds (1.9 Kbytes/s)
ftp> 

From rem-conf-request@es.net Mon Apr  5 13:13:33 1993
Date: Mon, 5 Apr 93 15:42:33 -0400
From: ejones@sdl.psych.wright.edu (Ed Jones)
To: rem-conf@es.net
Subject: Anything conferencing still going on?
Cc: ejones@sdl.psych.wright.edu
Content-Length: 1152
Status: RO
X-Lines: 23

I have some questions for the people here. I have a DECstation 5000/200 
that has a connection to the net but no audio output devices built in. 
I have a DEC 3000/400 (Alpha AXP) running OSF/1 that does have the 
built in 8K CODEC audio output. In the not so distant future I want to 
get some "real" audio device for the Alpha. Hopefully, something that
would sample at 44.1KHz or greater that would work on a TurboChannel bus.
Anyway, the Alpha does NOT have a connection to the internet outside of our 
school. There is an ethernet connection between all of my machines. 

What I want to do is this: I would like to be able to use the DECstation
to receive any audio/video teleconferencing that is occurring on the net,
assuming there is any and that I can receive it, yet have the Alpha actually
play the audio portion through its audio port. I have AF installed and running
on the alpha.

How can I use the DECstation to receive the teleconferences? What software
will I need? Any help will be greatly appreciated as I have no idea where
to start. Thank you.

	Ed Jones
	Psychology Department
	Wright State University
	ejones@sdl.psych.wright.edu

From rem-conf-request@es.net Tue Apr  6 11:46:12 1993
Posted-Date: Tue 6 Apr 93 11:03:58-PDT
Date: Tue 6 Apr 93 11:03:58-PDT
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Please don't start a radio session
To: rem-conf@es.net, MBONE@ISI.EDU
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 1385
Status: RO
X-Lines: 27

Folks,
	Please don't create your own radio station session to transmit
globally over the MBONE.  Feel free to make your own local session
(Scope=Site) if you have sufficient local bandwidth and your network
administrators agree.  However, there is not sufficient bandwidth on
the global MBONE to allow more than one radio session at a time.  The
Radio Free Vat session is serving that role; time slots are available
for those who want a turn at being DJ to the world.  Dave Hayes at JPL
is managing this session, using a mailing list to schedule time slots
and off-air periods to avoid conflicting with established sessions
such as the IETF meeting.  For more information, contact
vat-radio-request@elxr.jpl.nasa.gov.

	A second radio station is not going to cause immediate
overload, but I think you can imagine that this idea could catch on
and be the death of the MBONE.  Therefore, I'd like to try to keep a
limit of one radio session.  Several people have told me they think it
is ridiculous to waste network bandwidth on even one, but it serves a
useful role as a diagnostic mechanism for MBONE performance, as well
as providing some entertainment.

	Thanks for your cooperation.  Here's looking forward to the
day when real resource management mechanisms are deployed in the
network and you can send whatever you want (and are willing to pay
for).
						-- Steve Casner
-------

From rem-conf-request@es.net Wed Apr  7 04:21:09 1993
To: rem-conf@es.net
Subject: Multicast kernel extension for tadpole
Date: Wed, 07 Apr 93 12:06:08 +0100
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Status: RO
Content-Length: 402
X-Lines: 10


has anyone built a multicast kernel for a tadpole - we got one we
wanna carry around to meetings and take notes and plug in to mbone and
vat/nv/ivs where possible...seems to have an ether driver that aint
/dev/le0 so i guessed we'd need summat a bit different?

also, are sun going to fix solaris to do ip encapsulated multicast?

(actually, let me shorten that to "are sun gonna fix solaris":-)
jon 

From rem-conf-request@es.net Wed Apr  7 05:48:37 1993
From: smkim%gorai.kaist.ac.kr@daiduk.kaist.ac.kr (Kim Seon Man)
Subject: Problem in mrouted
To: rem-conf@es.net
Date: Wed, 7 Apr 93 21:39:14 KST
X-Mailer: ELM [version 2.3 PL11]
Status: RO
Content-Length: 1028
X-Lines: 28


I've reconfigured the kernel with MULTICASTing and MROUTE 
facilities in our SunOS 4.1.1 using "ipmulti-sunos41x.tar.Z".
I can use "vat" or "sd" successfully.
But I can't execute "mrouted".
The following message appears.

	$ mrouted -d 

	debug level 2
	mrouted $Revision: 1.3 $
	can't enable DVMRP routing in kernel: Operation not supported on socket

Please help me!!

smkim@gorai.kaist.ac.kr
-- 
=============================================================================
                        A.I.
                 CSCW    !    ITS       Seon-Man Kim
                     \  nMn  /          AI Laboratory . Computer Science Dept.
    _---_              /o o\            KAIST, Taejon, 305-701, Korea
    |\W/|             (  v  )
_---     |.            \---/            Phone : (042) 869-3557
|        ||             \ (             e-mail: smkim@gorai.kaist.ac.kr
| Xterm  || |--___+==\  /==\            	smkim@cair.kaist.ac.k
|        |. |  __|   \\/ == \		Fax   : (042) 869-3510
|----------------------------------|

From rem-conf-request@es.net Wed Apr  7 11:38:12 1993
X-Mailer: InterCon TCP/Connect II 1.1
Date: Wed, 7 Apr 1993 14:25:32 -0400
From: Bob Stratton <strat@intercon.com>
To: rem-conf@es.net
Subject: Kernel mods for the NeXT?
Status: RO
Content-Length: 366
X-Lines: 14

Hello everyone,

One of my co-workers just asked if anyone had done multicast kernel mods for 
NeXT cubes. He's running revision 3.0 of the OS, if that makes any 
difference. I haven't seen anything to this effect, but if someone's got it 
in the works, or needs a hand testing, please let me know.

Thanks,
Bob Stratton
InterCon Systems Corp.
strat@intercon.com




From rem-conf-request@es.net Thu Apr  8 00:25:49 1993
To: rem-conf@es.net
Subject: GSM encoding
Date: Wed, 07 Apr 93 15:15:02 +0530
From: { Kirtikumar Satam } <satam@saathi.ncst.ernet.in>
Status: RO
Content-Length: 563
X-Lines: 13


Can anyone out there point to the full specification of GSM encoding? Is
GSM an official name? Is there any CCITT/ANSI/ECMA standard for it?

Please reply by e-mail to satam@saathi.ncst.ernet.in

ciao,
	Satam.
---------------------------------------------------------------------
. Kirtikumar Satam . . . . . . . . . . . . e.mail: . . . . . . . . . 
. Visiting Scientist . . . . . . . . . . . satam@saathi.ncst.ernet.in
. National Centre for Software Technology, Bombay . . . . . . . . . . 
----------------------------------------------------------------------

From rem-conf-request@es.net Thu Apr  8 00:50:13 1993
Posted-Date: Thu 8 Apr 93 00:25:17-PDT
Date: Thu 8 Apr 93 00:25:17-PDT
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: Problem in mrouted
To: smkim%gorai.kaist.ac.kr@daiduk.kaist.ac.kr, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Status: RO
Content-Length: 387
X-Lines: 10

You must have missed some step in building your kernel (or you forgot
to boot the new one :-), because the error message means not all of
the necessary code is included.

It is most likely that you forgot to add "options MULTICAST" and
"options MROUTING" in your kernel config file, or to /etc/config.
Or perhaps you forgot to add the new modules into files.cmn.
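[To make that checklist concrete, here is a sketch of the lines involved.  The option names are the ones Steve gives; the file path and layout follow the usual SunOS 4.1.x kernel-build convention, so treat the exact placement as an assumption, not part of his message.]

```
# In the kernel configuration file (the one processed by /etc/config),
# e.g. /sys/sun4c/conf/MYKERNEL, add:
options         MULTICAST
options         MROUTING
```

After editing, rerun config, make sure the new multicast source modules are listed in files.cmn, rebuild and install the kernel, and reboot; mrouted's "Operation not supported on socket" error means the running kernel lacks the MROUTING code.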

						-- Steve
-------

From rem-conf-request@es.net Thu Apr  8 10:42:12 1993
From: trannoy@berlioz.crs4.it (Antoine Trannoy)
To: rem-conf@es.net
Subject: Shared whiteboard
Date: Thu, 08 Apr 93 19:16:38 +0100
Status: RO
Content-Length: 158
X-Lines: 7



sd lets us create sessions for audio, video and whiteboard...  Do shared
whiteboards using multicast exist?  If so, where can I find them?


Antoine

From rem-conf-request@es.net Thu Apr  8 17:12:51 1993
To: rem-conf@es.net
Subject: adding multimedia to a DECstation
Date: Thu, 08 Apr 1993 17:05:50 -0700
From: "Danny J. Mitzel" <mitzel@usc.edu>
Status: RO
Content-Length: 629
X-Lines: 17

I have access to a DECstation 5000/125 & 5000/200 that I'd like to try
adding audio/video support to.  From all the new audio/video releases
before the last IETF [ivs-3.0, nv-3.0, vat-1.56] it appears that the
DECstation is supported, but I'm unsure what hardware needs to be
added.

For video, the ivs documentation refers to 'Video TX' hardware, while
nv refers to a 'PIP frame grabber'.  Are these the same thing?

For audio the DEC Audio File (AF) package refers to the LoFi audio
hardware.

Can anyone tell me whether this hardware is supported on the DECstation
models I referred to; part #'s?

thanks,
danny (mitzel@usc.edu)

From rem-conf-request@es.net Thu Apr  8 22:36:08 1993
To: rem-conf@es.net
Subject: IP Multicast under Ultrix 4.2
Date: Thu, 08 Apr 1993 22:18:40 -0700
From: "Danny J. Mitzel" <mitzel@usc.edu>
Status: RO
Content-Length: 300
X-Lines: 7

I have a DECstation 5000 and 3100 I'd like to try patching with the IP
multicast code.  Problem is they're running Ultrix 4.2.  Can anyone tell
me whether the ipmulticast-ultrix4.2a-binary.tar code will drop in 4.2,
or are there major differences between 4.2 and 4.2a?

thanks
danny (mitzel@usc.edu)

From rem-conf-request@es.net Tue Apr 13 10:31:03 1993
Date: Tue, 13 Apr 93 13:15:17 EDT
From: David Paul Zimmerman <dpz@cleostratus.rutgers.edu>
To: rem-conf@es.net
Subject: Cameras?
Content-Length: 636
Status: RO
X-Lines: 11

I'm looking for video input devices for a SPARCstation to play with multicast
video, and from the faq.txt file on venera.isi.edu, I'm planning to get a
bunch of VideoPix boards from Sun and CCD cameras from Stanley Howard
Associates.  However, at the last IETF, I asked my boss to check into the
actual equipment being used at the meeting, and he came back saying that he
believed various people were recommending some Panasonic camcorder over the
SHA devices.  However, neither of us can find any more information about what
he heard... particularly *which* Panasonic camcorder, and why.  Does anyone
have any leads on this?

						dp

From rem-conf-request@es.net Tue Apr 13 12:17:59 1993
Date: Tue, 13 Apr 93 15:13:56 EDT
From: Bob Clements <clements@diamond.bbn.com>
To: rem-conf@es.net
Cc: frederic@parc.xerox.com, dpz@cleostratus.rutgers.edu
Subject: Cameras?
Content-Length: 932
Status: RO
X-Lines: 26


Ron sez:
   We did in fact use camcorders at the last IETF, and probably all of the
   previous ones.

Actually, at the first one where we did video, we had one camcorder and
one "real" camera.  The camera was a Panasonic WV-3260, an industrial grade
unit.  We used it for the better quality image of the two we transmitted.

Using a good camera reduces the noise in the video, which in turn reduces
the transmission bandwidth.


   I still haven't really heard much about the quality of the new color
   camera from Stan Howard, for example... (The old one was awful!)

We had a very short demo of the new one.  Unfortunately, he didn't
leave it with us for further testing.  There was some color error --
white shirts seemed a bit pink.  But the difference from the old one
is like night and day.  Let's hope it really is as good as it seemed.

   Ron Frederick
   frederick@parc.xerox.com

Bob Clements, K1BC, clements@bbn.com


From rem-conf-request@es.net Tue Apr 13 13:12:47 1993
To: rem-conf@es.net
Subject: Re: Cameras?
Date: Tue, 13 Apr 93 12:53:40 -0700
From: berc@src.dec.com
X-Mts: smtp
Content-Length: 322
Status: RO
X-Lines: 6


We did a shoot-out of a bunch of cameras & camcorders.  The clear
winner for office lighting situations was the Sony TR-81 Hi8 camcorder.  
Generally, straight video cameras (like the Sony DX107 or Philips 
VC72505T) cost about the same as (or more than) camcorders once you 
include an auto iris lens and power supply.

From rem-conf-request@es.net Tue Apr 13 13:15:55 1993
Date: Tue, 13 Apr 93 16:05:01 EDT
From: chang@muon.nist.gov (Wo_Chang_x3439)
To: rem-conf@es.net
Subject: Re: Cameras?
Content-Length: 114
Status: RO
X-Lines: 5


Did anybody test out the Sony Handycam model TR71?
Does it produce a lot of noise?

--Wo Chang <wchang@nist.gov>

From rem-conf-request@es.net Tue Apr 13 13:34:40 1993
Date: Tue, 13 Apr 1993 16:26:05 -0400
From: oj@world.std.com (Oliver Jones)
To: rem-conf@es.net
Subject: Cameras?
Content-Length: 262
Status: RO
X-Lines: 8


Re cameras, for the sake of experimentation and/or onesy-twosey
applications, try finding an older vidicon-based color camera
for sale through want ads;  they're cheap and quite good quality
(but not as sensitive as the latest CCDs).

Ollie Jones
Vivo Software

From rem-conf-request@es.net Tue Apr 13 14:26:11 1993
Date: Tue, 13 Apr 93 17:15:32 EDT
To: rem-conf@es.net
From: Dick Cogger <R.Cogger@cornell.edu> (Richard Cogger)
Sender: rhx@132.236.199.25
Subject: Re: Cameras?
Content-Length: 1120
Status: RO
X-Lines: 28

At  2:35 PM 4/13/93 -0400, Ron Frederick wrote:
>Hi David...
>
>We did in fact use camcorders at the last IETF, and probably all of the
>previous ones.
>About the only other consideration I can think of might be size. One of
>the really nice things about the Stan Howard cameras is how small and
>unobtrusive they are. I actually have both one of those and a normal
>--
>Ron Frederick
>frederick@parc.xerox.com

I agree with Ron.  For real quality, low light sensitivity, etc. you get a
lot of value in a $500-700 camcorder.  I have been testing one of the new
Howard "hi-rez color" cameras that does 270,000 pixels.  It's better than
the old monochrome ones, actually providing a more pleasing gray-scale, but
it's not a whole lot better.  The better grayscale is probably because the
monochrome camera is fairly sensitive to infra-red.  So a dark red sweater
next to a white shirt shows as very light gray, hardly contrasting with the
white.  

They tell me a higher rez monochrome is due in a while, and that may be the
good model until you want color.

-Dick Cogger, Cornell



From rem-conf-request@es.net Tue Apr 13 15:31:51 1993
Date: Tue, 13 Apr 93 18:22:05 -0400
From: ejones@sdl.psych.wright.edu (Ed Jones)
To: rem-conf@es.net
Subject: DECstation 5000 w/ Ultrix 4.3
Content-Length: 286
Status: RO
X-Lines: 5

I have seen the kernel mods for Ultrix 4.2a but I need some for 4.3. Are
there any mods for 4.3 yet? Soon? How about OSF/1? OpenVMS AXP? I have
two Alphas, one that runs OSF/1 and one that runs VMS. I would also 
like to get these machines on the MBone. Thanks for any help. 
	Ed Jones

From rem-conf-request@es.net Tue Apr 13 17:07:39 1993
From: schoch@sheba.arc.nasa.gov (Steven Schoch)
Date: Tue, 13 Apr 1993 16:32:39 -0700
X-Mailer: Z-Mail (2.1.3 26jan93)
To: rem-conf@es.net
Subject: Solaris 2.0
Sender: schoch@sheba.arc.nasa.gov
Content-Length: 710
Status: RO
X-Lines: 16

We have a system running Solaris 2.0 and I noticed that it supports
MULTICAST so I thought I'd get sd, vat, and nv running on it.

As it turns out, the only thing that makes the nv binary incompatible are
the #define's for IP_MULTICAST_TTL and IP_ADD_MEMBERSHIP.

So I added #define IP_MULTICAST_TTL 0x11 and #define IP_ADD_MEMBERSHIP 0x13
to nv.c, recompiled, and it works fine on the Solaris 2.0 system, at least
in receive-only mode.  (I had to do the compile on my 4.1.3 system and use
the Solaris 2.0 binary compatibility feature because Sun unbundles the C
compiler in 2.0).

I could do the same for sd and vat except that I can't find the source
anywhere.  Is anyone else running on Solaris 2.0?

	Steve

From rem-conf-request@es.net Wed Apr 14 11:38:35 1993
Date: Wed, 14 Apr 93 11:24:44 -0700
From: touch@ISI.EDU
Posted-Date: Wed, 14 Apr 93 11:24:44 -0700
Original-Received: by NeXT.Mailer (1.87.1)
Pp-Warning: Illegal Received field on preceding line
Original-Received: by NeXT Mailer 
                   (1.87.1)
Pp-Warning: Illegal Received field on preceding line
To: rem-conf@es.net
Subject: Re: Cameras?
Content-Length: 2759
Status: RO
X-Lines: 52

Hi,
	I noticed the flurry of mail about cameras, and figured I'd put in my  
1.38 cents (2, after taxes :-{ ).
	My application is slightly different - I used NeXTDimension boards  
for local analog video teleconferencing, sort of a "PABX" to the Internet  
digital stuff. The NeXTDimension handles, to the best of my knowledge,
24-bit NTSC at full frame rate, at "full" (13" TV-like) resolution.

> I checked with our Purchasing department, which connected me with a local
> place that is recommending a Panasonic WV-BL200 with a 6mm WVLA6A
> wide-angle lens.  It's more expensive than SHA's offering, though, and it
> is only b&w.
	Panasonic makes a color camera too - see below. The SHA cameras we
evaluated last fall were cheaper, not just less expensive (i.e., sacrificed
image quality for cost). They were too noisy (even the B&W), had low quality
control (focus was uneven over the frame), and the color camera had very low
output signal (the NeXT couldn't boost it enough digitally to use in an
office with office lighting). We haven't seen the new cameras, but given the
relationship between the product literature and sales talk vs. the demo on
the first camera....
	We also considered camcorders, and bought some Sony 8mm zoom  
(Macy/Sears/Silo/Circuit City) sales specials. They work fine, and the zoom  
is great, BUT they aren't suitable for desktop use. Their field is too big,  
and a person fills the frame from the waist up. We wanted to fill the frame  
from the shoulders up (i.e., head would be about 50% of the frame width) in  
order to permit 4:1 quad mixing of the video. If a person is too small in the  
1:1 frame, their head becomes the size of a penny in the 4:1, and you can't  
see their features well.
	For desktop teleconferencing, with the camera on the top of the  
monitor, we use one of two solutions:
	
	Panasonic GP-KS102 with 7.5mm lens
		this is a lipstick-sized camera on a cable to a box.
		costs over $2,000, but somewhat worth it in appearance
	Panasonic WV-CL320 with 6 mm lens
		this is about the size of a coke can
		costs about $750, with lens (we use auto-aperture)

	Both are color digital (CCD) cameras with NTSC output. The GP-KS102  
also has S-video out. The lenses give roughly the same field of view, because  
the CCD in the lipstick camera is slightly smaller (focal length + image  
field -> field of view ... Panasonic sent me a slide-rule gadget for  
calculating it, although the equations are 1st-year physics).
	We have used all 3 cameras as input to Bolter, VideoPix, etc., and  
they all work fine. Even though the desktop cameras cost the same as  
camcorders, they work a little better, because their focal length is more
appropriate for us.

	Joe Touch
	touch@isi.edu

From rem-conf-request@es.net Wed Apr 14 12:06:51 1993
From: ekim@nyquist.bellcore.com (Michael Mills 21340)
Subject: Audio Clipping
To: rem-conf@es.net
Date: Wed, 14 Apr 1993 14:56:59 -0400 (EDT)
X-Mailer: ELM [version 2.4 PL2]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Length: 73
Status: RO
X-Lines: 6

Is anyone else experiencing audio clipping on National Elec....




Mike

From rem-conf-request@es.net Wed Apr 14 14:42:10 1993
Date: Wed, 14 Apr 1993 16:33:24 -0500
To: rem-conf@es.net
From: cvk@uiuc.edu (Charley Kline)
X-Sender: kline@ux2.cso.uiuc.edu
Subject: A proposal on the RTP and RTCP options, as discussed
Content-Length: 6832
Status: RO
X-Lines: 142

My apologies if this got hashed out at the final meeting of the AVT WG at
IETF; unfortunately I had to miss that one as I had a conflict with another
session.

In any event, much discussion took place about the CSRC, CDESC, and SDESC
options, the appropriateness of putting text in transport-layer headers,
and so on.

As I recall, this discussion came up as a result of possibly having to use
8-byte SIP addresses or 20-byte NSAPs in Content Source options, imposing
an uncomfortably large overhead, particularly in audio packets where the
amount of payload is already small to keep packetization delays down.

Ron Frederick asserted in the WG that, as far as the Content Source option
is concerned, there is really no need to carry around network-layer
addresses at all, since the only purpose of Content Source is to identify
the original generator of the data after it has passed through a gateway. I
suspect all this would ever be used for is to provide some kind of
"talker indication" showing in real time who is
speaking/sending video/whatever.

As I began to work RTP into Maven, I began to believe in this more and
more, and I therefore propose the following:

Each receiver of RTP will need to keep two tables:

The Source Table merely holds human-interpreted information about a
particular data source. Specifically, it maps

        {sync-source-address, source-id} --> text

where sync-source-address is a network entity address ("socket address" to
use the current Unix IP parlance) for the synchronization source of the
data, and source-id is an arbitrary number, private to that sync source,
for a particular content source, among several of which the sync source may
be mixing. A source-id of zero is considered the default source id for RTP
data packets containing no CSRC option.

A receiver's source table is initially empty, and is filled in on receipt
of SDESC options from various sync sources.

The second table is the Content Table, which holds descriptive information
about a particular content stream. Specifically, it maps

        {sync-source-address, sync-source-content-id} --> {return-path,
        clock-quality, encoding-name, encoding-specific-data}

where sync-source-address is as above, sync-source-content-id is an
essentially arbitrary number (although standard fixed content-id's for
various audio and video encodings have been chosen), private to the sync
source, for a particular content encoding. The return-path, clock-quality,
encoding-name, and encoding-specific-data are as specified in the RTP
Internet Draft's description of the Content Description (CDESC) option.

A receiver's content table may be initially filled with default values for
the standard content id's, which may be augmented or replaced by CDESC
options from various sync sources.
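
The two mappings are easy to picture as C structures. Here is a minimal
sketch; all of the names (MAX_DESC, source_entry, content_entry) are mine,
not from the draft, and the field types are illustrative only:

```c
#include <netinet/in.h>

#define MAX_DESC 64  /* arbitrary cap for this sketch */

/* Source Table entry: {sync-source-address, source-id} -> text */
struct source_entry {
    struct sockaddr_in sync_source;    /* network entity ("socket") address */
    unsigned short     source_id;      /* private to sync source; 0 = default */
    char               text[MAX_DESC]; /* human-readable description */
};

/* Content Table entry:
 * {sync-source-address, content-id} -> decoding parameters */
struct content_entry {
    struct sockaddr_in sync_source;
    unsigned short     content_id;     /* standard or sync-source-private */
    struct sockaddr_in return_path;    /* generalized per the CDESC change below */
    unsigned char      clock_quality;
    char               encoding_name[MAX_DESC];
};
```

A receiver would pre-load content_entry slots for the standard content-id's
and let CDESC options augment or replace them.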


In addition to these two tables, there are 4 options:

CDESC - As specified in the RTP draft, this makes or modifies an entry in
the Content Table so the particular stream can be decoded. Particularly,
the "return address" and "return port" fields should be replaced by a
single "return-network-entity-address" field containing type and length
information so that it is sufficiently flexible in non-IPv4 environments.

SDESC - Slightly modified from the RTP draft, SDESC now contains only a
sync-source-chosen source-id, and a text string describing the source in
human-readable form.

SSRC - Generated by reflectors which do not also retime the RTP data
stream. SSRC contains only a network entity address specification
identifying the original sync source of the packet. In the absence of a
SSRC option, the sync source is given to be the network source of the
packet.

CSRC - Generated by gateways which retime and/or re-encode the RTP data
stream. CSRC provides a pointer to the Source Table so that the receiver
can display to the human viewer an indication of where the current stream
is originating from. In the case where a gateway is mixing two simultaneous
streams from two sources (additive audio mixing or split-screen or
"picture-in-picture" video), multiple CSRC options are provided in the RTP
packets, one for each source id. In the case where the stream does not pass
through a mixing gateway, no CSRC option is given, and the source
information presented to the user is that of source-id zero for that sync
source.


With these two tables and four options, a receiver can both correctly
decode an incoming RTP stream and respond to it: the packets carry sync
sources (or SSRC options) and content fields whose encoding the receiver
can determine by looking in the Content Table, and the receiver can present
to the user an accurate real-time indication of which source is active,
either via the default source-id of zero or via one or more source-id's
provided by a mixing gateway.


Example:
Two audio sources are simultaneously transmitting. Each of them
periodically sends out RTCP packets containing a CDESC to describe the
content encoding and an SDESC which describes source-id zero. The RTP audio
packets generated by each of the audio sources contain the proper
content-id in their RTP headers, but need neither CSRC options (because
source-id zero is speaking), nor SSRC options (because the sync source is
identical to the network source).

If these two audio sources are mixed by a gateway, the gateway will send
out RTCP packets with a CDESC describing the resulting content and two
SDESC's, with different source-id's, each a copy of the SDESC received from
one of the original sources. Then the RTP audio generated by the mixer will
require CSRC options on each packet to indicate who is speaking, and
possibly will contain two CSRC options if both sources are active at once
and are being mixed.

If the gateway's audio stream is then retransmitted by a reflector, the RTP
packets now copy the CSRC options provided by the gateway, but must now
also include SSRC options so the receiver of the retransmitted stream can
recover the original sync source.

It is interesting to note that, as an extra optimization, the reflectors do
not necessarily need to provide SSRC options if we are willing (and their
operating systems are willing) to let them fake the network source of
packets they generate so that they appear to originate from the true sync
source.


This scheme attempts to minimize the amount of overhead in the RTP header,
particularly when two or more sources are active at once through a mixer,
as well as minimize the number of network addresses that need to be carried
around by the RTP entities. It also makes implementation easier, as content
source identification becomes a simple table lookup rather than an
address-matching operation.


My apologies, this got a bit long.


--
Charley Kline, KF9FF                                cvk@uiuc.edu
UIUC Network Architect


From rem-conf-request@es.net Wed Apr 14 18:19:13 1993
Date: Wed, 14 Apr 93 17:58:41 PDT
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: CommWeek: IETF EYES PACKET VIDEO
Content-Length: 1055
Status: RO
X-Lines: 24


>From News Briefs, CommunicationsWeek, April 5, 1993, page 5:

		     IETF EYES PACKET VIDEO

	The Internet Engineering Task Force, which met in Columbus,
	Ohio, last week is considering forming two working groups to
	develop protocols for sending live video over packet networks.
	The groups could be formed within the next several weeks, 
	according to a task force member.


Would someone who attended the Columbus meeting provide some factual
foundation to this story? As far as I am aware, there is the AVT WG, 
the Conference Control WG_to_be, and the Remote Conferencing Architecture 
(and issues placeholder) no_longer_BOF_but_not_yet_a_WG (!).


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Wed Apr 14 18:23:18 1993
Date: Wed, 14 Apr 1993 18:15:17 PDT
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: Re: A proposal on the RTP and RTCP options, as discussed
Content-Length: 5178
Status: RO
X-Lines: 101

Excellent description, Charley! I like all of your proposed changes...

To put it in the context of what I remember happening at IETF, the
modified SDESC option would be one of the ones we would move off
to the "appendix" we talked about, as it's only reason for existing is
that we don't have any other companion protocol to transmit human
readable identifications for the streams.

All the remaining options (CDESC, SSRC, and CSRC) really are a part of
the transport protocol, if you assume that one of the jobs of RTP as a
transport protocol is to facilitate mixing & reflecting of streams while
maintaining some sort of original source identity.

The one thing we never really did resolve was how best to tie in some
external protocol which wanted to do mapping from sources to names,
once such a thing became available. That was the one place where having
a global identifier in our SDESC packets was still useful, even if that
global ID wasn't a network address.

An alternate proposal made was that this companion protocol would just
use the sync source local IDs, and I came out as against that in
Columbus. I'd like to back off from that position somewhat. After
thinking carefully about it, the idea of not requiring any sort of
global identifier to be cobbled together by RTP really appeals to me.
While it is true that there would now be a dependency between RTP and
this other protocol, it would only be one-way. An RTP implementation
which didn't know anything about this other protocol could still
interoperate just fine -- all the data could be transported in either
direction, with proper stream separation. It simply wouldn't have the
higher level name information.

So, going back to what Charley proposed, I'm going to try and fill it
out a bit...

The new CSRC option becomes simply 32 bits total:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|    CSRC     |  length = 1   | id unique within sync source  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
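
Read against the diagram, unpacking this option is a few shifts and masks.
A sketch follows; the struct and function names are mine, and the numeric
option-type code for CSRC is whatever the draft assigns, so the parser just
reports it:

```c
#include <stdint.h>

/* Hypothetical decoded form of the 32-bit CSRC option above. */
struct csrc_opt {
    int      final;     /* F bit */
    unsigned type;      /* 7-bit option type (should be the CSRC code) */
    unsigned len;       /* length in 32-bit words; always 1 for CSRC */
    uint16_t source_id; /* id unique within the sync source */
};

/* Unpack from the 4-byte network-order option. Bit 0 in the diagram is
 * the most significant bit of the first byte. */
static struct csrc_opt parse_csrc(const uint8_t *p)
{
    struct csrc_opt o;
    o.final     = (p[0] >> 7) & 1;
    o.type      =  p[0] & 0x7f;
    o.len       =  p[1];
    o.source_id = (uint16_t)((p[2] << 8) | p[3]);
    return o;
}
```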

The SSRC option now has to contain one of our new style network address
identifiers. I'm going to arbitrarily choose a format which is a 16-bit
"type" field followed by variable length data which is type-specific.
For fixed-size addresses, the "type" field implies the length. For truly
variable-size addresses, a length field might be required after the type
field, or the length might be something which can be determined by the
option length. Once we decide what address types we want to support, we
can work out those details. Anyway, here's a layout based on this:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|    SSRC     |    length     |         address type          |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Variable length type-specific address data ...            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Note that in our current case, this address would be a socket-level
address, which means it would include the UDP port number in addition
to the IP address.
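
As an illustration of the "type implies length" idea, here is how a sender
might emit an SSRC option carrying an IPv4 address plus UDP port. The
option-type code (2) and address-type code (1) are invented for this
sketch; no such values were assigned in the discussion:

```c
#include <stddef.h>
#include <stdint.h>

/* Emit an SSRC option for an IPv4 socket address into buf.
 * Returns the number of bytes written (a multiple of 4). */
static size_t emit_ssrc(uint8_t *buf, uint32_t ip, uint16_t port)
{
    buf[0] = 0x02;              /* F=0, assumed SSRC option-type code */
    buf[1] = 3;                 /* length: 3 32-bit words total */
    buf[2] = 0; buf[3] = 1;     /* address type 1 = IPv4 + UDP port (assumed) */
    buf[4] = ip >> 24; buf[5] = ip >> 16; buf[6] = ip >> 8; buf[7] = ip;
    buf[8] = port >> 8; buf[9] = port;
    buf[10] = buf[11] = 0;      /* pad to a 32-bit boundary */
    return 12;
}
```

Since the type here implies a fixed 6-byte address, the receiver needs no
inner length field, only the option length.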

The CDESC option would be almost identical to the draft. The only
change would be in the format of the return port stuff. I don't know
what the exact layout should be, but something like:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|   CDESC     |    length     |0|0|  content  | clock quality |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| MBZ           | MBZ           |     return address type       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Variable length type-specific return address data ...     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The SDESC option would just get rid of the address field, replacing it
with only the sync source id:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|   SDESC     |    length     | id unique within sync source  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Human readable text description of source ...                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

I'd propose that we eliminate the FDESC option at this point. It's sort
of in the same category as SDESC, in that it doesn't really belong in
RTP at all. However, unlike SDESC, we don't really have anything out
there which wants it in the short term, and so it would be better to
just leave it out.

That's all I can think of for now. My apologies, Charley, if anything
above doesn't match what you intended. Please feel free to correct any
mistakes you see...

--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Wed Apr 14 21:19:55 1993
Date: Wed, 14 Apr 1993 23:04:03 -0500
To: rem-conf@es.net
From: cvk@uiuc.edu (Charley Kline)
X-Sender: kline@ux2.cso.uiuc.edu
Subject: Re: A proposal on the RTP and RTCP options, as discussed
Status: RO
Content-Length: 1380
X-Lines: 35

At  6:15 PM 93/4/14 -0700, Ron Frederick wrote:
>
>That's all I can think of for now. My apologies, Charley, if anything
>above doesn't match what you intended. Please feel free to correct any
>mistakes you see...

Heck no. Perfectamundo. Thanks.

You bring up an interesting point about throwing SDESC over the fence into
conference control altogether. It's true that the locally-chosen
source-id's are not sufficient for conference control because they are not
globally unique; however the {sync-source-address, source-id} pairs are.

I'm not proposing we get rid of SDESC, simply because there is no
conference control protocol yet to do what it does. However, I will note
that specifying the functionality in the RTCP spec as a well-enough defined
API, such as

bool describe_source(struct sockaddr_in *sync_source, short source_id,
                     char *source_description);

char *lookup_source(struct sockaddr_in *sync_source, short source_id);

along with an appendix, as you say, which describes how this API can be
implemented as the SDESC option, puts us well on the way to being able to
push it out of RTCP altogether should we choose to later on down the line,
which is what I think a lot of people in Columbus were arguing for.


--
Charley Kline, KF9FF                                cvk@uiuc.edu
UIUC Network Architect

"...but that's all right, that's okay; just turn the page."


From rem-conf-request@es.net Thu Apr 15 15:27:18 1993
To: rem-conf@es.net
Cc: berc@SRC.DEC.COM
Subject: Re: Cameras?
Date: Thu, 15 Apr 93 18:18:13 -0400
From: Stefan_Savage@ALTAMIRA.ART.CS.CMU.EDU
Status: RO
Content-Length: 287
X-Lines: 9


> We did a shoot out of a bunch of cameras & camcorders.  The clear
> winner for office lighting situations was the Sony TR-81 Hi8 camcorder.

As another data point, we did comparisons and also bought a bunch of
TR-81's.  They can be bought for ~US$900.00. 

- Stefan
savage@cs.cmu.edu

From rem-conf-request@es.net Thu Apr 15 17:01:16 1993
Date: Thu, 15 Apr 1993 16:52:31 -0700
From: schooler@ISI.EDU
Posted-Date: Thu, 15 Apr 1993 16:52:31 -0700
To: rem-conf@es.net
Subject: Minutes for confctrl BOF at Columbus IETF
Cc: schooler@ISI.EDU, abel@thumper.bellcore.com
Status: RO
Content-Length: 15351
X-Lines: 323


These minutes were prepared by Eve Schooler@ISI from notes provided by Abel
Weinrib@Bellcore and Deborah Estrin@USC/ISI.



		    Conference Control BOF Minutes
		      26th IETF, Columbus, Ohio
			 March 30 and 31, 1993


1. Introduction and Presentations.

Two Conference Control (confctrl) BOF sessions were held at the
Columbus IETF.  The first meeting was used to provide an overview of
confctrl efforts both within and outside of the IETF.  Inside the IETF,
the confctrl group was spawned by the Remote Conferencing Architecture
(remconf) BOF.  Outside the IETF, interest in conference control,
sometimes referred to as connection management, has been ongoing for
some time.  Thus far, the confctrl mailing list has collected a sizable
bibliography containing references to many of the early and ongoing
research projects in this area.

Most of the first confctrl session was used for presentations on
different confctrl schemes.  The intent of the presentations was to
flesh out design assumptions, tradeoffs, complexity, scalability, etc.
The systems were classified according to several parameters:  whether
they (1) concentrate more on groupware conferencing (shared editors,
whiteboards) than on real-time audio/video conferencing, (2) provide
session control of packet-based real-time media vs analog real-time
media, (3) rely on centralized vs distributed session management,
and/or (4) observe loose vs tight session control.

Ruth Lang reported on CECED, the Collaborative Environment for
Concurrent Engineering Design, from SRI; Abel Weinrib focused on the
session elements and functions supported by Bellcore's Touring Machine; 
Hans Eriksson discussed the CoDesk architecture from SICS; Don
Hoffman of Sun Microsystems outlined the model used for the COCO
project; Chip Elliott of BBN presented his work on VideoTeam and the
Sticky confctrl protocol on which it relies; the versatile Multi-flow
Conversation Protocol was summarized by Lakshman K. from the University of
Kentucky; Eve Schooler of ISI gave an overview of the MMCC tool and its
Conference Control Protocol; and Thierry Turletti's ivs program was
discussed as a contrasting example that uses loose-style session
management.

2. Synthesis of confctrl approaches.

We used the second session to identify pervasive confctrl themes, and
to question the applicability of the various solutions to the
Internet.  The main objective was to narrow the scope of the problem en
route to the design of a generic confctrl protocol.  Our observations
were culled not only from the presentations at the IETF but also from
templates that were filled out prior to the meeting.  The templates
included Dave Lewis' write up of the UCL PREPARE project, a description 
of the ZAPT project by Joe Touch of ISI, a contribution from Jack
Jansen of CWI about the Meeting project, and Fengmin Gong's template on
the MCNC CONCERT Video Network Migration effort.

Of particular interest were implementors' comments about the aspects of
their approaches that were hard, easy, or warranted change.  Except for
a lone comment about the ease of implementation of floor control, there
were several recurrent themes regarding implementation difficulties:

    -   It is difficult to design a confctrl protocol that balances 
        simplicity with a high degree of semantic flexibility, e.g., 
	Jansen@CWI concluded that different conferencing styles require 
	entirely separate confctrl protocols.

    -   A distributed model comes with distributed system complexities:
	  o support for causality of multiway message exchanges 
	  o recovery from temporary network failures 
	  o propagation of consistent state information  
	The solutions proved to be cumbersome, unexpectedly hard and 
	often times "tricky".  
	
    - 	The underlying transport (that carries session control info) 
	comes at a price, e.g., the overhead of one RPC implementation
	led the PREPARE project to shift to a different, lighter
	implementation.

    -   There is room for improved media integration, e.g., asymmetric
	flows are difficult to characterize at setup, there is a need
	for more powerful control over presentation of media streams.

Most experimental systems either are or began as LAN-based conferencing
systems.   However, it was clear that many if not all are aiming for
WAN operation.  Although the tools that currently populate the MBONE
rely on loose-style session control, in the past most experimentation 
has taken place with tightly controlled session models -- though this is 
clearly changing.  We speculated that the predominance of tight-control 
systems may be a function of the interest in supporting *coordinated*
telecollaborations, which are readily modelled using a tight-control
framework, whereas the emergence of loose-control systems may be a
reflection of their relative ease of implementability.

Systems were clearly differentiated in their approaches to
interconnectivity among participants, both for session and for media
topology.  In certain cases, symmetry exists for N-way communication
capabilities, while in other cases conferees are asymmetrically
interconnected, relying on an initiator, moderator, filter/reflector
or a privileged set of designees to coordinate communication on
behalf of others.  Explicit vs implicit communication is another
distinguishing feature; this relates to whether or not the session has
policies attached to it, such as who dictates membership rules, the
extent to which session information is disseminated or if participant
information is meant to be kept globally coherent.  Finally, we
observed that the decision to model the system in a centralized or
distributed fashion influenced the degree of messaging synchrony and
causality.

3. Group Scope, Framework and Functional Taxonomy.

There was rough consensus on the definition of conference control as
the management and coordination of multiple sessions and their multiple
users in multiple media.  We were also in agreement that the focus of the
group is to design a "session layer" protocol to perform these functions.  
However, we debated the utility of designing a "teleconferencing"
session protocol specifically for the coordination of users' "media" 
versus designing a group negotiation protocol that is extensible to act 
as a conduit for media details.

We recognize that we cannot set out to support all conferencing
scenarios.  However we propose to support one loose style protocol (a
la Xerox PARC's nv, INRIA's ivs, BBN's dvc, LBL's vat, UMass' nevot)
and one tight style protocol (for negotiated and potentially private
sessions).  How loose and how tight?  To answer this, we must map the
list of conversation styles (from the last IETF minutes) into their
underlying confctrl session protocols.

As an example of how a tight-control approach to session management
might integrate with already existing MBONE tools, we demonstrated at
the IETF an X-based version of ISI's MMCC conference control tool.  We
used MMCC to explicitly invite a specific set of participants (vs
having a wide-open session), to distribute multicast addresses and a
shared encryption key among those participants, and to initiate as well
as tear down sessions comprised of nv, vat and/or BBN's newly released
PictureWindow.  

Although we emphasized that our goal is to design a session protocol, 
we conceded that there is a need for a common framework within which 
we can talk about conferencing control.  The framework that arose 
from discussion looked as follows:


           User A                                        User B

       +-------------+                               +-------------+
       |             |                               |             |
       | Application |                               | Application |
       |             |                               |             |
       +------+------+                               +------+------+
              |                                             |
       +------+------+                               +------+------+
       |             |                               |             |
       |   Session   |<----------------------------->|   Session   |
       |             |      "Session Protocol"       |             |
       +---+--+--+---+                               +---+--+--+---+
          /   |   \                                      /  |   \
         /   ...   \                                    /  ...   \
   +-------+     +-------+                        +-------+    +-------+
   | Media | ... | Media |<---------------------->| Media |    | Media |
   | Agent |     | Agent |     "Media Stream"     | Agent |    | Agent |
   +-------+     +-------+                        +-------+    +-------+


The premise is that the session protocol would be distributed in
nature, and would accommodate multiple user sessions (even though the
diagram depicts only two conferees).  There is a firm separation
between the session protocol and media transport protocols.  Thus, it
is immaterial whether the media transport is packet-based or analog.
Generic session state would include membership and policy information.
Application-domain specific state might include media interconnectivity
(topology) and media configuration (data encodings, rates).  Although
needing further refinement, the list of session functionality provided
to the end systems and reflected in the session protocol would
encompass:

	- Create/Destroy Session
	- Add/Delete Member
	- Set Policy
		- Who may join
		- Who may invite
		- Who may set policies
		- Etc.
	- Add/Change Application-Domain specific state
		- Media interconnectivity
		- Media configuration

	- Floor Control?
	- Prescheduling?
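
For illustration only, the operations above could map onto a set of
session-protocol opcodes along these lines (names and values are invented;
nothing here comes from any draft):

```c
/* Hypothetical opcode set for the session functionality listed above. */
enum session_op {
    SESS_CREATE,        /* Create Session */
    SESS_DESTROY,       /* Destroy Session */
    SESS_ADD_MEMBER,    /* Add Member */
    SESS_DEL_MEMBER,    /* Delete Member */
    SESS_SET_POLICY,    /* who may join / invite / set policies */
    SESS_SET_APP_STATE, /* media interconnectivity, media configuration */
    SESS_FLOOR_CONTROL, /* open question in the minutes */
    SESS_PRESCHEDULE    /* open question in the minutes */
};
```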

Polling the interest of the BOF participants, we found that 75% were
interested in solving the session protocol problem, 40% also would be
interested in defining or standardizing the media-agent-to-session-entity 
interface, and 30% were interested in configuration management issues.

4. Terminology.

It became evident that there are no set definitions for terms such as
conference, connection, session, media agents, etc.  Many of the
systems presented during the BOF and described in the templates used
these terms differently.  Thus, a confctrl terminology reference guide
needs to be developed.

We had been interchanging the phrases session control, session
management, connection control and connection management, but later
agreed that "connection" is too ambiguous since it is used at
any number of levels in the protocol stack.  We replaced connection
with the term "session", and broadly defined session as an association 
of members for control purposes.  However, it was later argued
that session looks too much like an OSI term.  The term "conference"
was also felt to be too application specific.  Therefore, the group is 
open to suggestions for a better name.

It was suggested (although not entirely resolved) that "media agents"
handle the media specifics associated with a session, "media" could be
considered any data streams that involve communication, and that floor
control is deemed the responsibility of a media agent when it concerns
a single media agent, but the responsibility of the session entity when
it requires coordination across different media agents (e.g., video to
follow audio).

We also differentiated between two meanings of configuration: the static 
end-system description, including hardware and software capabilities, 
and the per-session description.

5. Liaisons.

The confctrl group is committed to tracking the progress of
related efforts, both within and outside of the IETF.  An important
IETF linkage is to leverage off of ongoing work in the Audio/Video
Transport working group (AVT), which is nearing completion of the
Real-time Transport Protocol (RTP) specification.  During the first two
AVT sessions, there was considerable discussion about RTCP, the control
protocol associated with RTP.  Certain functions in RTCP were felt
to violate "layering"; they do not belong in the transport, but
would live comfortably within the session level, e.g., text strings of
session participants.  We will need to follow closely the outcome of
these developments, especially if certain services are assumed to percolate 
into the session layer.

The MBONE is another strategic testing ground for a confctrl solution,
although its use should not preclude use of these ideas elsewhere, nor
should these ideas be tailored specifically to the MBONE.  By
mentioning MBONE it is really meant that we expect in the long term to
have access to networks that support multicast and in the longer term
support real-time services.  The general Internet should suffice for
now.

Individuals who volunteered to track developments in related areas:

  Directory services	   Ruth Lang	       rlang@nisc.sri.com
  Multicast developments   Hans Eriksson       hans@sics.se
  Resource management/QoS  Fengmin Gong        gong@concert.net
  Audio/Video transport	   Steve Casner/       casner@isi.edu
			   Eve Schooler        schooler@isi.edu
  Security		   Paul Lambert/       Paul_Lambert@email.mot.com
			   Stuart Stubblebine  stubblebine@isi.edu
  ATM			   Yee-Hsiang Chang    yhc@hpl.hp.com
  MIBs			   Peter Kirstein      kirstein@cs.ucl.ac.uk

6. Action Items.

	- Make confctrl bibliography available
	- Documentation:
		- terminology reference guide
		- refinement of functional taxonomy
		- turn minutes into issues/framework document 
		- mapping of conversation styles into session protocols
	- Collect suggestions for a group name change

7. Attendees.

	Lou Berger		lberger@bbn.com
	Monroe Bridges		monroe@cup.hp.com
	Al Broscius		broscius@bellcore.com
	Randy Butler		rbutler@ncsa.uiuc.edu
	Yee-Hsiang Chang	yhc@hpl.hp.com
	Brian Coan		coan@bellcore.com
	Dick Cogger		r.cogger@cornell.edu
	Simon Coppins		coppins@arch.adelaide.edu.au
	Dave Cullerot		cullerot@ctron.com
	Steve DeJarnett		steve@ibmpa.awdpa.ibm.com
	Ed Ellesson		ellesson@vnet.ibm.com
	Chip Elliott		celliott@bbn.com
	Hans Eriksson		hans@sics.se
	Deborah Estrin		estrin@isi.edu
	Francois Fluckiger	fluckiger@vxcern.cern.ch
	Jerry Friesen		jafries@ca.sandia.gov
	Fengmin Gong		gong@concert.net
	Ken Goodwin		goodwin@psc.edu
	Mark Green		markg@apple.com
	Russ Hobby		rdhobby@ucdavis.edu
	Don Hoffman		hoffman@eng.sun.com
	Frank Hoffman		hoffmann@dhdibm1.bitnet
	Lakshman K.		lakshman@ms.uky.edu
	Michal Khalandovsky	mlk@ftp.com
	Peter Kirstein		kirstein@cs.ucl.ac.uk
	Jim Knowles		jknowles@binky.arc.nasa.gov
	Giri Kuthethoor		giri@ms.uky.edu
	Paul Lambert		Paul_Lambert@email.mot.com
	Ruth Lang		rlang@nisc.sri.com
	Patrick Leung		patrick@eicon.qc.ca
	Allison Mankin		mankin@cmf.nrl.navy.mil
	Don Merritt		don@brl.mil
	Paul G. Milazzo		milazzo@bbn.com
	Bob Mines		rfm@ca.sandia.gov
	Joseph Pang		pang@bodega.stanford.edu
	Geir Pedersen		Geir.Pedersen@ifi.uio.no
	John Penners		jpenners@advtech.uswest.com
	B. Rajagopalan		braja@qsun.att.com
	Michael Safly		saf@tank1.msfc.nsas.gov
	Eve Schooler		schooler@isi.edu
	Mike St.Johns		stjohns@darpa.mil
	Stuart Stubblebine	stubblebine@isi.edu
	Sally Tarquinio		sallyt@gateway.mitre.org
	Claudio Topolcic	topolcic@cnri.reston.va.us
	Mario Vecchi		mpv@bellcore.com
	Abel Weinrib		abel@bellcore.com
	John Wroclawski		jtw@lcs.mit.edu
	Yon-Wei Yao		yao@chang.austin.ibm.com


From braden@ISI.EDU Fri Apr 16 10:34:18 1993
Date: Fri, 16 Apr 1993 10:32:02 -0700
From: braden@ISI.EDU (Bob Braden)
To: rem-conf@es.net, ari@es.net
Subject: Re: CommWeek: IETF EYES PACKET VIDEO
Content-Length: 488
Status: RO
X-Lines: 16


  *> 
  *> 
  *> Would someone who attended the Columbus meeting provide some factual
  *> foundation to this story?

Ari,

As you probably already know, the factual foundation to this invention
by the Working Press lies in Dave Clark's BOF.  Dave talked about real time
support, and concluded that at some point it would be the subject of 
IETF working groups, and I think he suggested two working groups.

It isn't ready for prime time yet, folks.  Don't believe CommWeek.

Bob Braden

From rem-conf-request@es.net Fri Apr 16 13:01:42 1993
Date: Fri, 16 Apr 93 15:45:26 EDT
From: hgs@research.att.com (Henning G. Schulzrinne)
To: rem-conf@es.net
Subject: Conference control readings
Content-Length: 560
Status: RO
X-Lines: 14

Would it be possible for the authors that submitted templates to also
put PostScript copies of the papers/reports cited at some convenient
location? Obtaining some of the proceedings is non-trivial.

Also, there is an article in JSAC Vol. 9(9) by S. Minzer on EXPANSE, 
the Bellcore multimedia signaling protocol. Good articles on ISCP and
Q.93b would also be helpful (suggestions?)

Henning
---
Henning Schulzrinne (hgs@research.att.com)
AT&T Bell Laboratories  (MH 2A-244)
600 Mountain Ave; Murray Hill, NJ 07974
phone: +1 908 582-2262; fax: +1 908 582-5809

From rem-conf-request@es.net Fri Apr 16 14:45:33 1993
Date: Fri, 16 Apr 93 14:23:24 PDT
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: MPEG2 Reaches Milestone: Press Release for Sydney meeting
Content-Length: 6961
Status: RO
X-Lines: 146

With thanks to Chad Fogg (cfogg@ole.cdac.com) I'm posting this MPEG2
progress report... I like the statement that MPEG-2 "will support 
interoperability with the CCITT H.261 video telephony standard..." but 
wonder what that really means. I also wonder how many of the current 
producers of video conferencing codecs/systems are planning to implement 
MPEG-1 and/or MPEG-2.

-------------------------------

INTERNATIONAL ORGANISATION FOR STANDARDISATION
ORGANISATION INTERNATIONALE DE NORMALISATION
ISO/IEC JTC1/SC29/WG11
CODING OF MOVING PICTURES AND ASSOCIATED AUDIO

ISO/IEC JTC1/SC29/WG11  N0389
April 2, 1993
								

Source:	ISO/IEC JTC1/SC29/WG11
Title:	Press Release -- MPEG Sydney Meeting
Status:	For immediate release


Summary

This week in Sydney, at a meeting hosted by Standards Australia, the
Moving Picture Experts Group (MPEG) achieved its milestone of defining
the MPEG-2 Video Main Profile.  In its work toward developing a
multichannel Audio coding Standard, MPEG made important progress by
merging several previous proposals into a single unified proposal.  In
its work on the MPEG-2 Systems Standard, MPEG created an initial
specification for multiplexing multiple audio, video, and data streams
into a single stream for the transmission, storage, and access
requirements of many applications.

These achievements signal the convergence of such diverse industries 
as broadcast (including cable, satellite, and terrestrial),
telecommunications, entertainment, and computing to a single,
world-wide, digital video coding Standard for a wide range of
resolutions, including TV and HDTV.  MPEG confirmed that it is on
schedule to produce, by November 1993, Committee Drafts of all three
parts of its MPEG-2 Standard - Video, Audio, and Systems - for
balloting by its member countries.

To ensure that a harmonized solution to the widest range of
applications is achieved, MPEG is working jointly with the CCITT Study
Group XV "Experts Group on Video Coding for ATM Networks," as well as
representatives from other parts of CCITT, and from EBU, CCIR, and
SMPTE.

MPEG-2 Video

MPEG-2 Video is a developing International Standard which will specify
the coded bit stream for high-quality digital video.  MPEG-2 Video
builds on the success of the completed MPEG-1 Video Standard 
(ISO/IEC IS 11172-2) by additionally supporting interlaced video formats,
increased image quality, and a number of other advanced features,
including features to support HDTV.  MPEG also confirmed this week that
the MPEG-2 Main Profile will be a compatible extension of MPEG-1,
meaning that an MPEG-2 Video decoder will decode MPEG-1 bit streams.
Also, like MPEG-1, MPEG-2 will support interoperability with the CCITT
H.261 video telephony standard.

As a generic International Standard, MPEG-2 Video is being defined in
terms of extensible Profiles, each of which will support the features
needed by an important class of applications.  Among the applications
supported by the Main Profile will be digital video transmission in the
range of about 2 to 15 Mbit/s over cable, satellite, and other
broadcast channels, enabling exciting new consumer video services.
Because the MPEG-2 Video Main Profile can be implemented at reasonable
cost using today's technology, it will be possible to introduce these
services by early 1994.  With the Main Profile now defined,
manufacturers can complete their initial MPEG-2 Video encoder and
decoder designs.  Some manufacturers expect prototypes to be
operational by mid-1993.  Another feature of the Main Profile is
support for several picture aspect ratios, including 4:3, 16:9, and
others.

The development of further profiles is already well underway. The
collaboration between MPEG and the CCIR is bearing fruit with the
definition of a hierarchical Profile, which extends the features of
the Main Profile.  This Profile is well suited to applications such as
terrestrial broadcasting, which may require multi-level coding.  For
example, this system could give the consumer the option of using either
a small portable receiver to decode standard definition TV, or a larger
fixed receiver to decode HDTV from the same broadcast signal.

MPEG-2 Audio

MPEG is developing the MPEG-2 Audio Standard for multichannel audio
coding, which will be compatible with the existing MPEG-1 Audio
Standard (ISO/IEC IS 11172-3).  MPEG-2 Audio coding will supply up to
five full bandwidth channels (left, right, center, and two surround
channels), plus an additional low frequency enhancement channel, and/or
up to seven commentary/multilingual channels.  This week in Sydney,
MPEG merged several proposals from the November 1992 London MPEG
meeting into a unified specification.  In its audio work, MPEG is
collaborating with the CCIR to conduct subjective tests of the proposed
multichannel system.

The MPEG-2 Audio Standard will also provide improved quality coding of
mono and conventional stereo signals for bit-rates at or below 64
kbits/s, per channel.

MPEG-2 Systems

The MPEG-2 Systems Standard will specify how to combine multiple audio,
video, and private-data streams into a single multiplexed stream,
allowing for the transmission, storage, access, and retrieval of the
original streams, while maintaining accurate synchronization.  MPEG-2
Systems will be targeted at a wider range of applications than the
MPEG-1 Systems standard (ISO/IEC IS 11172-1).  As a generic standard,
MPEG-2 Systems will support a wide range of broadcast,
telecommunications, computing, and storage applications.

To provide support for these features, the MPEG-2 Systems standard will
define two kinds of streams.  The Program Stream provides for the
creation of an audio-visual program, which could have multiple views
and multichannel audio.  It is similar to the Systems Stream of MPEG-1,
with extensions for encoding program-specific information such as
multiple-language audio channels.  The Transport Stream is new to
MPEG-2.  It multiplexes a number of programs, comprised of video,
audio, and private data, for transmission and storage using a wide
variety of media.  The Transport Stream supports multi-program
broadcast, storage of single programs on digital video tape, robust
performance against channel errors, conditional access to programs, and
the maintenance of synchronization over complex networks and through
editing operations.

Collaboration

MPEG's acceptance into the industry continues to grow.  Two hundred
thirty experts representing over one hundred organisations came
together from eighteen countries to attend MPEG in Sydney this week.
Also represented were other standards setting organisations, with
interest from bodies including EBU, ETSI, CCIR, CCITT, and SMPTE.  
The spirit of international collaboration and cooperation was evident 
by progress achieved this week.  Current and potential users of MPEG
vary from individuals to major transnational corporations.

[end of MPEG2 Sydney press release]

From rem-conf-request@es.net Fri Apr 16 18:13:32 1993
Date: Fri, 16 Apr 1993 18:01:28 -0700
From: schooler@ISI.EDU
Posted-Date: Fri, 16 Apr 1993 18:01:28 -0700
To: rem-conf@es.net, hgs@research.att.com
Subject: Re: confctrl archive [was Conference control readings]
Cc: schooler@ISI.EDU, abel@thumper.bellcore.com
Content-Length: 971
Status: RO
X-Lines: 27


The confctrl archive is located on venera.isi.edu in the
confctrl directory.  The current directory contents:

	confctrl.mail	the mail archive
	charter		the latest version of the charter
	templates	a log of all templates submitted so far
	minutes		a directory containing minutes from meetings

And imminently (it still needs some reformatting/uniformity):

	bib		confctrl bibliography

>Would it be possible for the authors that submitted templates to also
>put PostScript copies of the papers/reports cited at some convenient
>location? Obtaining some of the proceedings is non-trivial.

If you make/made the location known, I will include pointers in the 
bibliography.  Were you also suggesting a central repository?

E.

    Eve M. Schooler                       
    USC/Information Sciences Institute    Voice:  310-822-1511, x114
    4676 Admiralty Way                    FAX:    310-823-6714  
    Marina del Rey, CA 90292              E-mail: schooler@isi.edu


From rem-conf-request@es.net Mon Apr 19 05:38:25 1993
Date: Mon, 19 Apr 93 08:02:51 EDT
From: hgs@research.att.com (Henning G. Schulzrinne)
To: rem-conf@es.net
Subject: [RTP] CDESC address
Content-Length: 1225
Status: RO
X-Lines: 24

I see two possibilities for the return address in CDESC:

(1) a numeric address, with a prefix indicating what type it is. The prefix
could be a DNS RR CLASS value, as discussed in my previous message.

(2) a DNS host name in dotted notation (hgs.tempo.att.com).

The latter has the disadvantage of being somewhat longer (not than
NSAPs...), but since CDESCs are not carried in every packet, that is
not such a great problem. Numeric addresses have the potentially fatal
flaw that the sender may not know which of its network addresses a
receiver needs to use in order to reach it.  Clearly, that's currently a
non-issue: just use the IPv4 address and be done. However, imagine the
case where a SIP-host is sending out a CDESC to the multicast group.
It may well decide to put its 8-byte SIP address in that field.
However, one or more of the receivers may not have converted to SIP and
thus have no way of making use of that address information. (Short of
doing a reverse name lookup on the SIP address, and then using the host
name to get, say, an IPv4 address.) Including several network addresses
does not sound particularly attractive to me.

In short, I propose the use of domain names in the CDESC return address.
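The resolve-at-the-receiver step behind this proposal can be sketched as follows; the function name and port are illustrative, and socket.getaddrinfo simply stands in for a DNS lookup:

```python
import socket

def resolve_cdesc_return_address(name, port):
    """Resolve a CDESC return address carried as a DNS host name.

    Each receiver resolves the name in whatever address family it
    supports, so the sender never has to guess which of its network
    addresses (IPv4, SIP, ...) a given receiver can actually reach.
    """
    infos = socket.getaddrinfo(name, port, type=socket.SOCK_DGRAM)
    family, socktype, proto, _, sockaddr = infos[0]
    return family, sockaddr

# A receiver that only speaks IPv4 and one that speaks a newer family
# would each obtain a usable address from the same CDESC field.
family, addr = resolve_cdesc_return_address("localhost", 3456)
print(family, addr)
```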

Henning

From rem-conf-request@es.net Wed Apr 21 01:44:48 1993
From: Ilka Milouchewa <ilka@prz.tu-berlin.dbp.de>
Posted-Date: Wed, 21 Apr 1993 10:27:05 +0100 (MESZ)
Received-Date: Wed, 21 Apr 93 10:27:07 +0200
Subject: RFC 1453 - XTP some comments
To: rem-conf@es.net
Date: Wed, 21 Apr 1993 10:27:05 +0100 (MESZ)
X-Mailer: ELM [version 2.4 PL13]
Content-Type: text
Content-Length: 3391
Status: RO
X-Lines: 78


As a contribution to the XTP RFC 1453 discussion and to some of its open
problems, it may be interesting to present the view of the RACE project
CIO (R2060), which is now using XTP for multimedia applications:

**** Why did we select XTP as the starting point for multimedia applications in CIO?

1.  XTP provides connection-oriented transport and network transmission.

This makes it attractive to map XTP onto ATM networks and to exploit the
bandwidth reservation possibilities of ATM. XTP enjoys a simple mapping to
ATM: in particular, XTP routes map to ATM Virtual Paths and XTP context
keys map to Virtual Channels (VCIs).

2.  XTP lets us build new protocols with very little work.
Different transport services, such as connection-mode, connectionless,
and transaction services using different techniques for connection
establishment and release, can be provided with XTP.

XTP provides flexible and quick connection setup/teardown.  Particularly
important is the fast connection establishment with the FIRST PDU, which
allows connectionless and transaction services to be supported.

3. Flexible error handling - XTP allows the behavior in the face of
detected packet loss to be specified, i.e. the retransmission strategy,
or turning retransmission off entirely, which is useful for multimedia
applications.  XTP also allows checksumming to be turned on or off.

4. TSDU-oriented transmission based on the EOM flag.  The BTAG can
additionally be used by multimedia applications for transmitting
application-dependent control data.

5.  XTP has rate-based flow control.
It also has windows, but rate-based flow control is more important for
multimedia applications.  Rate-based flow control also provides a very
convenient mechanism for implementing throughput and bandwidth
reservation for QoS requests.
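Rate-based sending of this kind can be sketched as a simple credit scheme; the rate and burst parameters below are illustrative, not taken from the XTP definition:

```python
import time

class RatePacer:
    """Minimal rate-based flow control: credit packets at a fixed
    rate with a bounded burst, independent of any window."""

    def __init__(self, rate_pps, burst):
        self.rate = rate_pps        # packets per second allowed
        self.burst = burst          # maximum accumulated credit
        self.credit = burst
        self.last = time.monotonic()

    def try_send(self):
        now = time.monotonic()
        # Accrue credit for the time elapsed, capped at the burst size.
        self.credit = min(self.burst,
                          self.credit + (now - self.last) * self.rate)
        self.last = now
        if self.credit >= 1.0:
            self.credit -= 1.0
            return True             # caller may transmit one packet
        return False                # caller must wait; no window involved

pacer = RatePacer(rate_pps=100, burst=10)
sent = sum(pacer.try_send() for _ in range(50))
print(sent)   # roughly the initial burst of 10 in a tight loop
```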

6. XTP RTT calculation is not a matter of heuristics: time stamps in
CNTL PDUs are used to calculate the RTT and the transmission delay.

7. XTP provides a 32-bit SORT field for prioritization of connections.
The priority of a connection depends on the required QoS parameters
(QoSPs).

******* About the discussed complexity of the XTP mechanisms:

XTP's mechanisms are quite simple to implement and, as far as the user is
concerned, are visible only through the QoSPs that he requires from XTP.
These QoSPs can be based on the OSI connection-mode or connectionless
transport service, or on some other particular transport service such as
HSTP or OSI 95.

CIO defines for XTP a transport service for multimedia applications
based on connection-mode, connectionless, and transaction transmission.
The CIO QoSPs are based largely on the OSI model, but some new QoSPs are
defined to meet the requirements of multimedia applications, such as the
reliability class and the kind of QoSP provision (best effort or
guaranteed).  The reliability class QoS specifies the error handling of
TSDUs, such as "ignore corruption and loss", "ignore corruption,
indicate loss", "correct corruption and loss", etc.

There are several running versions of XTP:
the XTP Forum's XTP (Kernel Reference Model, KRM),
Network Xpress's XTP (Virginia),
Mentat's STREAMS-based XTP (Los Angeles),
the CIO XTPX developed by TUB, and
some other experimental versions.
There are also some more formal treatments of XTP in transputer
specifications and Petri-net based work.


Regards from Germany .. Ilka Miloucheva.


From rem-conf-request@es.net Wed Apr 21 06:07:20 1993
From: ola@erix.ericsson.se (Ola Carlvik)
Date: Wed, 21 Apr 93 14:55:25 +0200
To: rem-conf@es.net
Subject: PC-Multicast
Content-Length: 460
Status: RO
X-Lines: 18

Hello everyone..

Does anyone know whether there is, or will be, any multicast
support/software for PCs?

I have talked to the head of a school here in Sweden that is very
interested in multicast but has based its teaching on 486s.

I am not connected to any multicast mailing list, so please subscribe me
or contact me directly!

Ellemtel
SU/Ola Carlvik
Box 1505
125 25 Alvsjo
tel +46 8 727 38 76
fax +46 8 647 82 76
email ola@erix.ericsson.se

From rem-conf-request@es.net Wed Apr 21 06:10:16 1993
From: trannoy@berlioz.crs4.it (Antoine Trannoy)
To: rem-conf@es.net
Subject: Sony NeWs
Date: Wed, 21 Apr 93 14:57:37 +0100
Content-Length: 229
Status: RO
X-Lines: 7


I remember once seeing a posting on this list about the Sony NeWs machine.
I would like to know if there is an available version of VAT-compatible
software running on it.  Thanks in advance for your help.


Antoine Trannoy

From rem-conf-request@es.net Wed Apr 21 11:03:38 1993
Posted-Date: Wed 21 Apr 93 10:53:53-PDT
Date: Wed 21 Apr 93 10:53:53-PDT
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: PC-Multicast
To: ola@erix.ericsson.se, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 246
Status: RO
X-Lines: 6

Larry Backman of FTP Software told me that their PCTCP product
supports IP Multicast.  Assuming there is someone from FTP on
this list, I'll let them give the email address for customer
requests (something at ftp.com).
						    -- Steve
-------

From rem-conf-request@es.net Wed Apr 21 14:48:45 1993
From: schoch@sheba.arc.nasa.gov (Steven Schoch)
Date: Wed, 21 Apr 1993 14:38:35 -0700
X-Mailer: Z-Mail (2.1.3 26jan93)
To: rem-conf@es.net
Subject: Sparcstation audio
Sender: schoch@sheba.arc.nasa.gov
Content-Length: 209
Status: RO
X-Lines: 5

We're buying some audio equipment to connect to a SPARCstation and we need
to know the impedance and voltage levels on both the input and output jacks
of the sparcstation.  Does anyone have that info?

	Steve

From rem-conf-request@es.net Thu Apr 22 12:21:41 1993
Date: Thu, 22 Apr 93 15:05:27 EDT
To: rem-conf@osi-west.es.net
From: Dick Cogger <R.Cogger@cornell.edu> (Richard Cogger)
Sender: rhx@132.236.199.25
Subject: New CU-SeeMe, GSH Project-- on the mbone
Cc: Global School House Project <gsh@nic.cerf.net>,
        David Lambert <hdl2@cornell.edu>,
        "M. Stuart Lynn" <msl%cornella.BITNET@bitnet-mail-gw.cornell.edu>
Status: RO
Content-Length: 9730
X-Lines: 177

Rem-conf Folks,
        A new version of Cornell's Mac video, CU-SeeMe, is available for
anonymous ftp.  The new version, CU-SeeMe0.40 has a number of improvements,
most significant among them being support of multi-party conferencing.  The
multi-party conferencing is implemented by use of a "reflector" program
that runs (so far) on a Sun Sparc system, but should port easily to other
unix platforms.  Each Mac opens a connection to the reflector, and sends
video which is repeated to each other Mac in the conference.  A Mac may
choose to be only a sender, only a receiver, or both.
        This support will be used next Wednesday, April 28th, to support
the NSF's Global School House project, involving a conference among four
grade school classes, three in various parts of the U. S. and one in the U.
K.  Look for a full description of this event on rem-conf sometime tomorrow
from Carl Malamud.  
        Also, look for an announcement soon from Ron Frederick of a new
version of nv, the unix-based network video package from Xerox PARC with a
CU-SeeMe decoder included.  The latest version of the reflector provides an
option to multicast the video streams it receives, using RTP headers.  We
are just starting testing, but assuming a positive result, we plan to
multicast the GSH conference on the mbone.  Audio for the GSH conference
will be handled by a telephone bridge, but we have tested and plan to patch
the audio into VAT and send it onto the mbone as well.  Please be aware
that as much as we want to supply this event to the mbone, priority has to
go to good functioning of the conference itself.  Various rehearsals and
tests will be ongoing between now and the actual event, scheduled for 10:30
EDT Wednesday.  Look for announcements in sd or further messages here on
rem-conf.
        Below is a readme file which tells where to get the CU-SeeMe
program and the reflector.  The version of the reflector on the ftp server
now doesn't support multicast or inter-reflector links, but subsequent
versions should be posted later today or tomorrow.
        To experiment with CU-SeeMe, refer to the readme, below.
        To observe GSH trials and actual event, you will need nv 3.1, due
out soon.

-Dick Cogger

-----------------------ReadMe follows----------------------

README file
Cornell Video "CU-SeeMe0.40"
4/7/93.

Cornell University's Information Technology organization (CIT) has
developed a Macintosh videoconferencing program called CU-SeeMe. It
displays 4-bit grayscale windows at either 320x240 or half those
dimensions, 160x120, and does not (yet) include audio. CU-SeeMe in version 0.40
provides a one-one connection or, by use of a "reflector," a one-many,
several-to-several, or several-to-many conference. Each participant can
decide whether to be a sender, a receiver, or both. Receiving requires only
a Mac with a screen capable of displaying 16 grays and a connection to the
Internet. Sending requires the same plus a SuperMac VideoSpigot board, a
camera, Quicktime and SpigotVDIG extensions added to the system folder.
Although much improved over earlier versions, this is still BETA software--
use at your own risk. And please treat the Internet kindly-- keep bw limits
set down under 100kbps unless you know where you're putting network load. 

Questions, comments, additional info, please email or phone: 

Dick Cogger (the management sponsor), r.cogger@cornell.edu 607-255-7566 
Scott Brim (the network consultant), scott-brim@cornell.edu 607-255-9392 
Tim Dorcey (the CU-SeeMe programmer), tim.dorcey@cornell.edu 607-255-5715 
John Lynn (the reflector programmer), john.lynn@cornell.edu 607-255-7341 

The software is freely available via anonymous ftp from gated.cornell.edu
in the directory /pub/video as CU-SeeMe0.40.bin.  This file is
README.CU-SeeMe.txt. There is also a choice of VDIG files needed for use
with the SuperMac VideoSpigot frame grabber board. The executable and VDIG
files are stored in a MacBinary II format. It is most convenient to use
Fetch2.1 to retrieve the files as it will automatically unpack them. 

Specifications to RECEIVE video:
- Macintosh platform only with a 68020 processor or higher 
- System 7 and higher operating system (it "may" run on system 6.07 and above) 
- ability to set your monitor to 16 grayscale
- an IP network connection
- MacTCP
- CU-SeeMe0.40 file (file size is approximately 28K) 

Specifications to SEND video:
- The specifications to receive video mentioned above 
- Video Spigot hardware (street price is approximately $380)
- camera with NTSC 1Vpp output (like a camcorder) and RCA cable
- Quicktime installed (requires approximately 2 MB of memory)
- SpigotVDIG Quicktime component (driver), approximately 300 KB on disk

To operate CU-SeeMe

1. Be sure the screen is switched to 16 grays with the Monitors control panel. 

2. Launch CU-SeeMe0.40. You will be asked for a name and your sending and
receiving default preference. Then if you see a video window on the top
left of the screen, the program believes you have a VideoSpigot installed
with the Quicktime extension and the SpigotVDIG component. If you see
yourself, you have a camera operating. If you get only a menu-bar, you are
in receive-only mode. 

3. Either way, to receive you need to pull Open from the Connection menu
and type in an IP address (dot notation, no DNS, sorry) of a Spigot-equipped
mac running CU-SeeMe0.40 (earlier versions NOT compatible), or of a
reflector. If all is well, and no one else is connected to the other end
(if you're calling another Mac), it will start sending to you. If you are
calling a reflector, you may be the only one connected, in which case you
will see no windows until someone else connects. If multiple folks connect,
you will get a window for each sender, up to a limit of 8. If there's no
answer, you'll get a connection failure message. 

4. By default, framerate and bandwidth used are displayed in a bar below
each video window. You can turn off the bar with Transmission Rate in the
Local Video menu or Reception Rate from the Remote Video menu. 

5. If you have a Spigot-equipped Mac running the program, waiting for a
request to send, the bar under the local window will show framerate only
until someone requests (or you open a connection) and you start sending--
then you will also see an indication of bandwidth. (You can't tell who,
though, if they don't send.) On the right end of the bar under the local
window is shown a "cap", which limits the bandwidth used for sending (and
hence the framerate, depending on the amount of motion); it can initially
be set in a dialog you open as Display Controls on the Local Video window.
If the other end of a connection reports packet loss, the cap will be
lowered, and it will go back up if the loss reports stop.
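The cap behavior just described is essentially a multiplicative decrease on loss with slow additive recovery; the constants below are illustrative, not CU-SeeMe's actual values:

```python
def adjust_cap(cap_kbps, loss_reported, floor=10, ceiling=100):
    """Lower the send-rate cap on a loss report; raise it slowly otherwise."""
    if loss_reported:
        cap_kbps = max(floor, cap_kbps * 0.75)   # back off on reported loss
    else:
        cap_kbps = min(ceiling, cap_kbps + 2)    # recover gradually
    return cap_kbps

cap = 80
cap = adjust_cap(cap, loss_reported=True)    # drops to 60.0
cap = adjust_cap(cap, loss_reported=False)   # creeps back to 62.0
print(cap)
```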

6. Each time a remote video window (or internal connection) opens or is
resized, you will see first a middle-gray field and then an impromptu demo
of the frame differencing and aging algorithms. Only changed 8x8 pixel
squares are sent, except that if a square remains unchanged for a number of
frames, it is sent anyway to heal any results of lost packets. Initially,
the "age" of each square within the Refresh Interval is set to a random
number, so the window will fill in gradually or as the subject moves. You
can adjust the Refresh Interval in the Controls Dialog. 
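The square-based differencing and aging just described can be sketched as follows; the Refresh Interval value and the exact change test are illustrative, not CU-SeeMe's actual parameters:

```python
import random

BLOCK = 8
REFRESH_INTERVAL = 30   # frames before an unchanged square is resent anyway

def blocks_to_send(prev, cur, age, width, height):
    """Return the (bx, by) squares to transmit for this frame.

    A square is sent if its pixels changed, or if it has gone
    REFRESH_INTERVAL frames without being sent (healing lost packets).
    """
    out = []
    for by in range(height // BLOCK):
        for bx in range(width // BLOCK):
            changed = any(
                prev[y][x] != cur[y][x]
                for y in range(by * BLOCK, (by + 1) * BLOCK)
                for x in range(bx * BLOCK, (bx + 1) * BLOCK))
            age[by][bx] += 1
            if changed or age[by][bx] >= REFRESH_INTERVAL:
                age[by][bx] = 0
                out.append((bx, by))
    return out

W = H = 16   # two 8x8 squares in each dimension
prev = [[0] * W for _ in range(H)]
cur = [row[:] for row in prev]
cur[0][0] = 1   # change one pixel in the top-left square
# Start each square at a random age so refreshes spread out over time.
age = [[random.randrange(REFRESH_INTERVAL) for _ in range(W // BLOCK)]
       for _ in range(H // BLOCK)]
blocks = blocks_to_send(prev, cur, age, W, H)
print(blocks)   # always includes the changed square (0, 0)
```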

7. By default, the smaller size frames 160x120 are grabbed and displayed.
You can choose the double sized ones 320x240 in the Local Video menu. The
Spigot is much faster at producing the smaller ones. Whichever size you
grab (and transmit), you can display at either size. You can also display
remote windows at either size. If you are getting the small size and
displaying the large, three quarters of the pixels are generated by simple
linear interpolation, and it's amazing it works as well as it does. The
zoom box (upper right in title bar) on the window will toggle between large
and small display. 

8. In the Local Video menu you can choose to have yourself shown as in a
mirror image which makes it easier for some folks to position themselves in
the frame, etc. 

9. Also in the Local Video menu is an option to choose More Frames or
Better Sync. The VDIG for the Spigot board maintains a fifo of frames,
three frames deep, so each frame you grab is three frames old. Each time
the program grabs a frame, the vdig gets another one and puts it at the far
end of the fifo. When you're going 30 fps, a 3-frame delay is not a big
problem, but at 10 or fewer fps, the delay becomes significant in terms of
lip-sync with telephone audio. When you choose Better Sync, the program
grabs a frame and throws it away and grabs another one to use, thus
shortening the fifo. There are differing opinions about whether More Frames
or Better sync provides a more useful image. If the bandwidth limit is
going to keep you down close to 10fps anyway, you almost surely do better
to turn on Better Sync. Same thing if your CPU is a cx or less and is
limiting your framerate because of, for example, receiving several streams.


To operate the CU-SeeMe Reflector:

1. Obtain software -- a tar file on gated.cornell.edu /pub/video as
reflect.v3.tar. Untar and install in the usual way on a Sun Sparc with a
good Internet connection. 

2. Issue the unix command: "reflect" and then open connections to the Sun
from Mac's running CU-SeeMe0.40. 

3. If you issue "reflect -r", each Mac will receive a copy of its own video
stream as well as any other streams. 

4. Soon we will put the sources for the reflector up so folks can try
porting to other platforms. Anything with Berkeley networking should be OK,
but we did find an incompatibility with the sockets implementation in AIX
for the RS-6000. 
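The heart of a reflector of this kind — repeating each client's packets to every other connected client — can be sketched in a few lines; learning clients from their own traffic is an assumption here, not a detail of Cornell's implementation:

```python
import socket

def reflect_once(sock, clients):
    """Receive one packet and repeat it to every other known client."""
    data, sender = sock.recvfrom(65536)
    clients.add(sender)                 # learn clients from their traffic
    for peer in clients:
        if peer != sender:              # don't echo back (cf. "reflect -r")
            sock.sendto(data, peer)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))             # ephemeral port for this sketch
sock.settimeout(5)
clients = set()

# Simulate two Macs sending to the reflector over loopback.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.settimeout(5)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
refl_addr = sock.getsockname()
a.sendto(b"hello from A", refl_addr)
reflect_once(sock, clients)             # A is learned; nothing to repeat yet
b.sendto(b"hello from B", refl_addr)
reflect_once(sock, clients)             # B's packet is repeated to A
data, _ = a.recvfrom(65536)
print(data)                             # b"hello from B"
```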
 


From rem-conf-request@es.net Thu Apr 22 18:03:03 1993
Posted-Date: Thu 22 Apr 93 17:40:35-PDT
Date: Thu 22 Apr 93 17:40:35-PDT
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: Sparcstation audio
To: SCHOCH@sheba.arc.nasa.gov, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Status: RO
Content-Length: 2937
X-Lines: 67

I do not have specs for the impedance and voltage levels of the
SPARCstation audio input and output, but I have connected the ISI
teleconference room audio system to our SPARCstation audio, and in
doing so, I have learned a few tips that I might pass along.  I began
by purchasing several adapters at my neighborhood Radio Shack.

To connect professional audio gear using balanced lines and XLR
connectors, I used the following series of adapters from XLR audio to
the SPARC microphone input:

    274-016 Adapter/transformer - converts XLR jack to 1/4" phone plug
        and transforms the low impedance of the XLR system to the high
        impedance the SPARC wants on its microphone input.

    274-389 Adapter - 1/4" phone jack to RCA phono plug.

    274-300 Signal reducer - RCA phono jack to 1/8" mono phone plug,
	with 40dB signal reduction (100K:1K) to get down to microphone
	level.  Or, if your signal is already at microphone level, use
	the 274-330 adapter in place of the signal reducer.  The
	reducer, in combination with a microphone level setting of 44
	got me the level I wanted.

	More recently, I wanted to adapt the output of a consumer VCR
	to the input of the SPARCstation and found that 40dB was too
	much attenuation (the microphone level had to be set too high,
	causing noise).  So, I constructed my own 24dB attenuator
	(15K:1K).	
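As a quick check of the attenuation figures, treating each pad as a plain resistive voltage divider (which ignores source and load impedance):

```python
import math

def divider_db(r_series, r_shunt):
    """Voltage attenuation of a series/shunt resistive divider, in dB."""
    ratio = r_shunt / (r_series + r_shunt)
    return -20 * math.log10(ratio)

print(round(divider_db(100_000, 1_000)))  # 100K:1K pad: ~40 dB
print(round(divider_db(15_000, 1_000)))   # home-made 15K:1K pad: ~24 dB
```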

In the other direction, the first caution is that the 1/8" plug you
insert into the headphone output of the Sun adapter cable MUST BE
STEREO.  If you plug in a 1/8" mono plug, it shorts the output.  You
could use a variety of adapter combinations depending upon what is in
stock, but I used the following:

    274-374 Adapter - 1/8" stereo phone plug to 1/8" mono phone jack
	to satisfy the stereo plug requirement above.

    42-2157 6' Cable - 1/8" mono phone plug to 1/4" mono phone plug.

    274-017 Adapter/transformer - converts from a 1/4" phone jack to
	XLR plug and transforms from high impedance to low impedance
	for the XLR system.

One problem with this combination is that the signal level gets
reduced as it goes through the transformer in the 274-017 adapter.
You can't get enough output from the SPARC to make up for this loss
because 54 is the highest output level setting you can use on the
SPARC to avoid clipping in the SPARC's output amplifier if a maximum
amplitude digital signal is played.  However, the low-level signal
works fine if you can plug it into a microphone-level input on a mixer
for amplification.

If you don't have a mixer that can take a microphone-level input, as
an alternative you can make the following cable to hook the unbalanced
signal directly to the balanced system:

    1/8" stereo                              XLR
     phone plug                              plug

	   tip --+-------------------------- 2 signal + 
	middle -/
					  /- 3 signal -
	  ring --------------------------+-- 1 shield

							-- Steve
-------

From rem-conf-request@es.net Fri Apr 23 16:19:43 1993
Date: Fri, 23 Apr 1993 16:04:48 -0700
From: schooler@ISI.EDU
Posted-Date: Fri, 23 Apr 1993 16:04:48 -0700
To: rem-conf@es.net
Subject: Confctrl meeting using MBONE
Cc: schooler@ISI.EDU, abel@thumper.bellcore.com
Content-Length: 654
Status: RO
X-Lines: 24


The Conference Control BOF that met at the IETF plans to hold
a follow-on telemeeting over the MBONE.


    Date:
	Wednesday, April 28, 1993

    Time:
	0800-1000 PDT/ 1100-1300 EDT/ 1600-1800 UTC

    Topics for Discussion: 
	Report on charter
	Revised framework
	Mapping of conversation styles into session protocols
	Terminology reference guide
	Refinements to functional taxonomy
		
    Participation:
	We will advertise the session as "Confctrl Meeting"
	using the LBL sd program [v1.11 or later], which in turn will
	invoke vat [v1.56 or later].  For detailed information about MBONE 
	participation, see the file venera.isi.edu:mbone/faq.txt.  


From rem-conf-request@es.net Sat Apr 24 09:56:26 1993
Date: Sat, 24 Apr 1993 12:34:19 -0500
To: schooler@isi.edu, rem-conf@es.net
From: Scott_Brim@cornell.edu
X-Sender: swb@nr-tech.cit.cornell.edu
Subject: Re: Confctrl meeting using MBONE
Cc: abel@thumper.bellcore.com
Status: RO
Content-Length: 1149
X-Lines: 22

Eve, as Dick said on rem-conf and I mentioned on the mbone list, the NSF is
holding a "global schoolhouse" event Wednesday morning from 10:30-12:00 EDT
(more or less).  There will be four grade schools, some high-level
government people, probably a senator or two, and corporate people either
participating or observing.  Ron Frederick just about has nv 3.1 ready,
which will decode the video we are using, so in addition to sending video
and audio to the participating sites we would like to put it out on the
mbone.  Clearly I'm a bit concerned about the overlap and what it will do
to the Internet.  I know it's asking a lot of the European and Australian
participants, but could you possibly start later?  There is no way I can
change the Global Schoolhouse given the above list of invitees.
                                               Hopeful thanks ... Scott

At 19:04 4/23/93 -0400, schooler@ISI.EDU wrote:
  >The Conference Control BOF that met at the IETF plans to hold
  >a follow-on telemeeting over the MBONE.
  >
  >    Date:
  >        Wednesday, April 28, 1993
  >    Time:
  >        0800-1000 PDT/ 1100-1300 EDT/ 1600-1800 UTC


From rem-conf-request@es.net Sat Apr 24 13:56:57 1993
Date: Sat, 24 Apr 93 16:59:56 EDT
From: carl@malamud.com (Carl Malamud)
To: rem-conf@es.net
Subject: Global Schoolhouse Project
Org: Internet Talk Radio
Status: RO
Content-Length: 4437
X-Lines: 96

For the past ten weeks, a team of volunteers from 30
organizations has donated time, money, equipment, software, and
bandwidth to make the National Science Foundation's Global
Schoolhouse Project a reality.  This note briefly explains the
project and its significance.

For the past six weeks, schoolchildren in grades 5-8 have been 
conducting original research on the environment in their 
communities.  With the help of a curriculum developed by the 
FrEdMail Foundation, they have conducted surveys and tests, 
have prepared videotapes and other materials, and have read 
Vice President Gore's "Earth in the Balance."  The children 
are located in schools in Oceanside, California; Knoxville, 
Tennessee; Arlington, Virginia; and London, England.

Using the Internet, the children have been exchanging messages
with each other using FrEdMail.  They have also been using
Cornell University's CU-SeeMe videoconferencing software and
Sprint audioconference bridges to communicate with each other.

On April 28th, they will conduct a videoconference on the
Internet to brief each other and national leaders on what can be
done about the environment.  Several prominent leaders have been
invited to participate, and a variety of dignitaries and members
of the media have been invited to observe.

Technically, the April 28 videoconference consists of CU-SeeMe
running on camera-equipped Macintosh computers donated by Apple.
CU-SeeMe sends a video stream to a Sparcstation, donated by Sun,
which acts as a central reflector, sending the video from each
site to the other sites participating in the conference.
Xerox Parc has modified the NV software to read CU-SeeMe streams,
allowing the April 28 videoconference to be rebroadcast to the
MBONE.  (Note that the 4-site conference is our top priority and
if we sense network problems, the MBONE link will be cut.)

Each of the schools has been equipped with a local network, with
all of the resources donated or furnished on long-term loan. 
This equipment includes Cisco routers, Cayman GatorBoxes, and
David Systems UTP hubs.  The network connectivity for the Global
Schoolhouse has been furnished by SprintLink, CERFnet, the
NSFNET, ICMnet, Suranet, Metropolitan Fiber Systems, Pacific
Bell and Bell Atlantic. Local loop connectivity uses either 
T1 lines or SMDS.

Each of the schools has a teacher or group of teachers that has
worked hard on the curriculum and on using the technology. 
Working alongside these teachers have been Internet mentors.  We
are grateful to CERFnet, SNMP Research, the University College
London, and Dave Staudt of NSF for taking time out of their
schedules to work with these schools.

It has been remarkable to see how all these organizations have
pitched in to give children the opportunity to use the network as
part of their education.  The National Science Foundation is
contemplating expanding the project in future years to include
additional classrooms, other guests, and further advanced and
improved technologies.

There are several lessons that can be learned from the Global
Schoolhouse.  First, affordable (though not yet cheap) technology
is available that allows K-12 groups to join the Internet. 
Second, business/government/university partnerships can be a
valuable tool for bringing connectivity to new groups.  Third,
because we have a general-purpose infrastructure in the Internet,
we were able to make this project happen very quickly.

The list of vendors mentioned here is not exhaustive and is meant
only to illustrate the breadth of sponsorship.  The summary of
this project should not be taken to reflect the official views of
the National Science Foundation or any of the project sponsors.

My organization, Internet Talk Radio, sees events like the Global
Schoolhouse Project as the beginning of an Internet Town Hall, a
place where national and international leaders and citizens can
hold a continuing dialogue.  Internet Talk Radio, in cooperation
with other organizations, will be placing a 10 Mbps link from the
Internet into the National Press Club in Washington, D.C.  We
hope that this centrally-located site will be a place where we
can bring our leaders onto the network to talk to us on a regular
basis.

For more information:

	About NSF: dmitchel@nsf.gov

	About CU-SeeMe: r.cogger@cornell.edu

	About FrEdMail: alrogers@cerf.net

	About This Message: carl@radio.com

Regards,

Carl Malamud
Internet Talk Radio

From rem-conf-request@es.net Sat Apr 24 16:48:48 1993
To: rem-conf@es.net
Subject: nv and Parallax XVideo
Date: Sat, 24 Apr 1993 16:42:46 -0700
From: "Danny J. Mitzel" <dmitzel@whitney.hitc.com>
Status: RO
Content-Length: 133
X-Lines: 5

I was wondering if anyone has a port of the nv code to the Parallax XVideo
card on the Sun?

thanks,
danny (dmitzel@whitney.hac.com)

From rem-conf-request@es.net Sun Apr 25 21:53:23 1993
To: rem-conf@es.net
Subject: Literature on adaptive playout?
Date: Sun, 25 Apr 1993 21:05:21 -0700
From: "Danny J. Mitzel" <mitzel@usc.edu>
Content-Length: 222
Status: RO
X-Lines: 6

Can anyone give me pointers to literature on adaptive playout
mechanisms for real-time media, such as that used by vat conference
mode or other audio/video applications across the Internet?

thanks,
danny (mitzel@usc.edu)
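
The adaptive playout mechanism the question refers to can be sketched roughly as follows: track a smoothed estimate of network delay and of its variation with EWMA filters, and schedule playout at the delay estimate plus a safety margin, adjusting only at talkspurt boundaries. This is a hypothetical illustration in the spirit of vat-style tools, not vat's actual code; the class name and the constants `ALPHA` and `K` are assumptions.

```python
ALPHA = 0.998  # EWMA smoothing factor (assumed value, tuned per application)
K = 4.0        # safety multiplier on the variation estimate (assumed)

class PlayoutEstimator:
    def __init__(self):
        self.d_hat = 0.0   # smoothed one-way delay estimate (seconds)
        self.v_hat = 0.0   # smoothed delay variation (jitter) estimate

    def update(self, send_ts, recv_ts):
        """Feed the estimator one packet's timestamps (seconds)."""
        d = recv_ts - send_ts  # measured delay (includes any clock offset)
        self.d_hat = ALPHA * self.d_hat + (1 - ALPHA) * d
        self.v_hat = ALPHA * self.v_hat + (1 - ALPHA) * abs(d - self.d_hat)

    def playout_delay(self):
        """Target playout delay to apply at the next talkspurt boundary."""
        return self.d_hat + K * self.v_hat
```

With a steady delay the estimate converges to that delay and the jitter term shrinks toward zero; bursts of variable delay inflate `v_hat` and push the playout point later, trading latency for fewer late-packet drops.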

From rem-conf-request@es.net Sun Apr 25 22:46:09 1993
Date: Sun, 25 Apr 1993 22:08:50 -0700
From: schooler@ISI.EDU (Eve Schooler)
To: rem-conf@es.net
Subject: Re: Confctrl meeting using MBONE
Cc: P.Kirstein@cs.ucl.ac.uk, abel@thumper.bellcore.com, carl@malamud.com,
        schooler@ISI.EDU, scott_brim@cornell.edu, swb@nr-tech.cit.cornell.edu
Content-Length: 1175
Status: RO
X-Lines: 40

Oops.

Although I was aware of the RARE meeting being held next week
on MBONE, I had not written down the time/date of the Global Schoolhouse 
Project; the suggested time for confctrl would overlap somewhat 
with it.  Sorry 'bout that.  We can postpone the confctrl meeting to
avoid net overload.  Stay tuned for another time and/or date....

E.

>From schooler@ISI.EDU Fri Apr 23 16:05:04 1993
>Date: Fri, 23 Apr 1993 16:04:48 -0700
>To: rem-conf@es.net
>Subject: Confctrl meeting using MBONE
>
>The Conference Control BOF that met at the IETF plans to hold
>a follow-on telemeeting over the MBONE.
>
>
>    Date:
>	Wednesday, April 28, 1993
>
>    Time:
>	0800-1000 PDT/ 1100-1300 EDT/ 1600-1800 UTC
>
>    Topics for Discussion: 
>	Report on charter
>	Revised framework
>	Mapping of conversation styles into session protocols
>	Terminology reference guide
>	Refinements to functional taxonomy
>		
>    Participation:
>	We will advertise the session as "Confctrl Meeting"
>	using the LBL sd program [v1.11 or later], which in turn will
>	invoke vat [v1.56 or later].  For detailed information about MBONE 
>	participation, see the file venera.isi.edu:mbone/faq.txt.  
>
>


From rem-conf-request@es.net Mon Apr 26 10:22:04 1993
Date: Mon, 26 Apr 1993 09:58:13 PDT
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: nv 3.1 available
Content-Length: 679
Status: RO
X-Lines: 15

Hello everyone...

Version 3.1 of the 'nv' network video tool is now available for anonymous
ftp from parcftp.xerox.com, in the /pub/net-research directory. The main
new feature of this version is that it is able to decode video sent by the
latest version of the CU-SeeMe video tool for the Macintosh, when sent
through the reflector which adds an RTP header to it. If you're interested
in watching the Global Schoolhouse broadcast on Wednesday, you'll need
to get this new version of 'nv'.
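
As a rough illustration of the RTP framing the reflector adds, here is a minimal parser for a fixed 12-byte RTP header. It assumes the later standardized RTP v2 layout; the draft header in use on the MBONE at this time may have differed, so treat this strictly as a sketch.

```python
import struct

def parse_rtp_header(pkt: bytes):
    """Parse a fixed 12-byte RTP header (RTP v2 layout, assumed here)."""
    if len(pkt) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", pkt[:12])
    return {
        "version": b0 >> 6,           # top two bits of the first octet
        "payload_type": b1 & 0x7F,    # identifies the media encoding
        "marker": (b1 >> 7) & 1,      # e.g. frame boundary for video
        "sequence": seq,              # for loss detection and reordering
        "timestamp": ts,              # media clock for playout timing
        "ssrc": ssrc,                 # identifies the sending source
    }
```

A tool like nv would use the payload type to select a decoder and the sequence/timestamp fields to reorder and schedule frames.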

In addition to the sources, binary versions are available for the Sun 4 and
SGI platforms. A DECstation binary version should be available soon.
--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Mon Apr 26 21:35:09 1993
Date: Mon, 26 Apr 93 20:57:26 PDT
From: blc@mentat.com (Bruce Carneal)
To: rem-conf@es.net
Subject: Re: XTP...
Content-Length: 4407
Status: RO
X-Lines: 89

 Ran Atkinson <atkinson@itd.nrl.navy.mil> writes:

> Things to keep in mind about XTP.
> 
>   -- It was a very useful _research_ project and protocol

Gee, I guess that means we'll have to send back all those checks we've deposited
from customers! :-)/2

>   -- It cannot be extended in any way because there are no options whatever.

There are several ways to extend XTP if that is deemed necessary.  For example:
	1) New packet types (almost unlimited extension here within a new ptype)
	2) Fill in current Must Be Zero fields (e.g. the two CNTL rsrv fields)
	3) Define new weird and wonderful address family types
	4) Define new service type field values for the address segment
	5) Fill in current Must Be Zero bits in the cmd word
	6) Define new backward compatible conventions for underspecified fields
	7) Finally, up the XTP version field by one if drastic surgery is needed

To be specific, technique 4 is being used in a proposal by TUB and Mentat
to add variable length Quality of Service information at the tail end of the
address segment.  Since the address segment only travels with FIRST and PATH
packets the proposed extension does not contribute to general header bloat,
nor should it pose interoperability problems with current implementations
(excepting martinets:-).

>   -- There are no provisions or even hooks for security and it _cannot_
>      be secured in its present form.
Cannot is a strong word.  We (mentat) have not worked on a security proposal
yet but with routes and variable length address segments outside the main
data path it is hard to see what would stand in the way.  Was your point
simply that XTP does not currently have an explicit security mechanism?  If
not, could you explain why you feel XTP _cannot_ be secured through backward
compatible extensions?

>   -- Implementation is non-trivial.  UVa had several grad students working
>      on XTP implementation for several years -- they have gotten their version
>      and at least the KRM to be interoperable.  (I was not an XTP
>      implementor or participant during my time there).
I can't comment on the UVa experience, not having been there nor having talked
at length with the implementors.  I can say that XTP was significantly easier
to implement from scratch than was our commercial TCP/IP stack and that
stack was implemented *after* RFC1122 came out to make the lives of TCP
implementors a bit easier.

>   -- No implementations exist which are available at reasonable cost for
>      research purposes.	
> 
This depends on what you mean by reasonable, obviously.  Some people in
research define "reasonable" as "free, with source code".   These people
are not our "customers".

>   I am all for taking any/all good ideas that people see in XTP, but
> the place to apply those ideas is in other protocols (e.g. Multicast
> TCP extensions).  XTP does not get us some incredibly great advantage
> for audio/video conferencing that we can't get using what we are doing
> today.
> 
This is the main problem with every new protocol, "it's not TCP".  I do think
XTP has some advantages for audio/video conferencing, however, such as
	1) XTP routes and their implied resource reservation etc.
	2) NOERR, NOFLOW
	3) acceptable multicast mechanism
	4) address family indifference
	5) higher level framing mechanisms (BTAG fields and EOM)
	6) rate based flow control with burst regulation
I would hesitate to call this collection "incredibly great" but it isn't
chopped TCP either. :-)/2
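
Item 6, rate-based flow control with burst regulation, is the general idea a token bucket captures: a sustained sending rate plus a bounded burst allowance. The following is an illustrative sketch with hypothetical names and parameters, not XTP's actual mechanism.

```python
class RateLimiter:
    """Token bucket: sustained rate with a bounded burst (illustrative)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # sustained rate in bytes/second
        self.burst = burst_bytes     # bucket depth bounds the burst size
        self.tokens = burst_bytes    # start with a full bucket
        self.last = 0.0              # time of the last refill

    def allow(self, now, pkt_bytes):
        """Return True if a packet of pkt_bytes may be sent at time now."""
        # Refill tokens at the sustained rate, capped at the burst depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False
```

The bucket depth regulates how large a burst may be emitted at once, while the refill rate enforces the long-term average, which is the essential shape of rate-based flow control.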

Certainly the TCP/IP family is growing and will accommodate
just about anything that promises to make a buck, or fetch a grant :-), or
piques the interest of TCP devotees worldwide.  No reason it shouldn't, as the
reigning market champ.

> The assertion that XTP maps nicely to ATM VCI/VPIs is interesting.
> However, despite what you might have heard in the trade press, the
> installed base of Ethernet and FDDI is not going away and in fact the
> installed base of both continues to grow.  ATM is not now and in my
> opinion never will be a "universal protocol" in the sense that
> IPv4/CLNP are.  All the ATM zealotry in the world won't change that.

Agreed.  XTP seems to map pretty well wherever we run it, however.
The four XTP implementations that I have worked with all run over multiple
physical media, just like _real_ protocols.

Please get in touch if the above seems unclear or inaccurate.

blc@mentat.com

From rem-conf-request@es.net Tue Apr 27 07:59:06 1993
Date: Tue, 27 Apr 1993 10:44:36 -0500
To: Ron Frederick <frederic@parc.xerox.com>, rem-conf@es.net
From: Scott_Brim@cornell.edu
X-Sender: swb@nr-tech.cit.cornell.edu
Subject: Re: nv 3.1 available
Content-Length: 492
Status: RO
X-Lines: 8

I just want to point out that we still have a few unwanted features in the
Mac side, which are not mbone's fault but which mbone will appear to suffer
from.  In particular, there will be some Mac "lurkers" which will be
receiving video but not sending during the conference -- they will
unfortunately show up in nv as gray windows.  Just ignore them.  As God
said (in one universe, anyway), we apologize for the inconvenience.
                                                        Scott


From rem-conf-request@es.net Tue Apr 27 10:34:07 1993
Date: Tue, 27 Apr 93 13:19:28 EDT
To: rem-conf@osi-west.es.net
From: Dick Cogger <R.Cogger@cornell.edu> (Richard Cogger)
Sender: rhx@132.236.199.25
Subject: Global School House
Content-Length: 741
Status: RO
X-Lines: 22

Rem-conf folks,

        The Global School House video conference had its last rehearsal
this morning.  The video may be up at times thru the day from some of the
sites.

Pick up from sd:  GSH video- use this one
                   and:  GSH Audio, recv only

The audio was a phone patch at Cornell to a telephony-style conference
bridge, and the sites have now signed off the audio.  I'll be listening
much of the day, if anyone has comments.    Please report any success or
otherwise in seeing/hearing it. 

The "real" event is scheduled for tomorrow morning, Wed April 28th at
10:30AM EDT.
Probably, you will see folks setting up starting about 9:30, if you tune in.

-Dick Cogger

        	       	       	       	       	       	-Dick


From rem-conf-request@es.net Tue Apr 27 12:55:25 1993
Date: Tue, 27 Apr 1993 15:38:21 -0500
To: Dick Cogger <R.Cogger@cornell.edu> (Richard Cogger),
        rem-conf@osi-west.es.net
From: Scott_Brim@cornell.edu
X-Sender: swb@nr-tech.cit.cornell.edu
Subject: Re: Global School House
Content-Length: 1295
Status: RO
X-Lines: 32

I had never set up an sd conference before, so I messed up a few times.  

(1) the correct conferences to tune in on are the ones that say "use this
one" or something like that, for both audio and video.

(2) is there any way to get rid of an unwanted conference in sd?  I don't
just mean in my display, but to expunge it from what everyone else sees?
                                                        Thanks ... Scott

At  1:19 PM 4/27/93 -0400, Dick Cogger (Richard Cogger) wrote:
  >Rem-conf folks,
  >
  >        The Global School House video conference had its last rehearsal
  >this morning.  The video may be up at times thru the day from some of the
  >sites.
  >
  >Pick up from sd:  GSH video- use this one
  >                   and:  GSH Audio, recv only
  >
  >The audio was a phone patch at cornell to a telephony style conference
  >bridge, and the sites have now signed off the audio.  I'll be listening
  >much of the day, if anyone has comments.    Please report any success or
  >otherwise in seeing/hearing it. 
  >
  >The "real" event is scheduled for tomorrow morning, Wed April 28th at
  >10:30AM EDT.
  >Probably, you will see folks setting up starting about 9:30, if you tune in.
  >
  >-Dick Cogger
  >
  >                                                        -Dick


From rem-conf-request@es.net Tue Apr 27 15:20:20 1993
Date: Tue, 27 Apr 93 17:54:56 EDT
From: atkinson@itd.nrl.navy.mil (Randall Atkinson)
To: rem-conf@es.net
Subject: XTP...
Content-Length: 83
Status: RO
X-Lines: 6


Agreed that consideration of XTP is out of scope.

Ran
atkinson@itd.nrl.navy.mil


From rem-conf-request@es.net Wed Apr 28 08:34:30 1993
Date: Wed, 28 Apr 93 11:05:20 EDT
From: chang@muon.nist.gov (Wo_Chang_x3439)
To: rem-conf@es.net
Subject: GSH video
Status: RO
Content-Length: 220
X-Lines: 8

Just wondering: is the "Global School..." event broadcasting now?
I'm running sd (v1.13) and I don't see any items related to the
"Global School..." topic.  Am I missing something?

Thanks.

--Wo Chang <wchang@nist.gov>


From rem-conf-request@es.net Thu Apr 29 08:30:10 1993
Date: Thu, 29 Apr 93 08:09:35 -0700
To: rem-conf@es.net
From: kchong@uci.edu (Keith Chong)
X-Sender: kchong@mothra.nts.uci.edu
Subject: Solaris 2.0
Content-Length: 774
Status: RO
X-Lines: 22



Hi all,
  We are in the process of purchasing a couple of SparcStation Classics to
join the MBONE and were wondering if anyone out there has any experience
with nv, sd, and vat on Solaris.  Has anyone recompiled them on Solaris,
and if not, what are the plans for porting them?

I am also wondering if the multicast routing of Solaris actually works.

On a side note, I sent a message to rem-conf-request on 4/26 asking to be
added, but I have gotten no reply, and I don't think I have been added
because I have not received any messages yet.  So if the owner of the
list is out there, can you please add me to the list?

Please reply to me at KChong@uci.edu, as I will not know when I have been
added to this list.

Thanks 

Keith 


From rem-conf-request@es.net Thu Apr 29 10:25:18 1993
Date: Thu, 29 Apr 93 13:08:41 EDT
From: hgs@research.att.com (Henning G. Schulzrinne)
To: kchong@uci.edu, rem-conf@es.net
Subject: Re: Solaris 2.0
Content-Length: 351
Status: RO
X-Lines: 11

Nevot has been compiled on Solaris 2.1, using the Sun ANSI compiler.
No extensive testing has been done, though, as I don't have physical
access to a SPARC running Solaris.

Henning
---
Henning Schulzrinne (hgs@research.att.com)
AT&T Bell Laboratories  (MH 2A-244)
600 Mountain Ave; Murray Hill, NJ 07974
phone: +1 908 582-2262; fax: +1 908 582-5809


From rem-conf-request@es.net Thu Apr 29 12:02:47 1993
Date: Thu, 29 Apr 1993 11:44:59 -0800
To: rem-conf@es.net
From: smith@sfu.ca (Richard Smith)
X-Sender: smith@popserver.sfu.ca
Subject: Archive site for rem-conf
Content-Length: 201
Status: RO
X-Lines: 6

        I am looking for information on Internet video conferencing for IBM
PC and/or RS/6000 based machines.  Also, is there an archive site for this
mailing list (rem-conf)?  Thanks in advance.

...r


From rem-conf-request@es.net Fri Apr 30 09:52:06 1993
Date: Fri, 30 Apr 93 12:31:07 EDT
From: Paul Stewart <stewart@ipl.rpi.edu>
To: rem-conf@es.net
Subject: FDDI => Unicast + Ether => Multicast
Content-Length: 808
Status: RO
X-Lines: 15

  I've been informed that this is rapidly becoming an FAQ, but not quite 
knowing where the mail archives are, I am unable to peruse these.  I have
installed a Sun FDDI SBus card in a SPARCStation 10/41 running 4.1.3, with
a multicast kernel.  The bf driver install fails due to undefined kernel
symbols, apparently during the modload.  I've been informed that A) FDDI
is not compatible with multicast, and B) There is a way to get around 
this failure by having the Ethernet continue to do mcast, and have FDDI 
do unicast.
  Could someone email me with a procedure for doing this?  I've mailed in
a request for addition to this mailing list, but haven't gotten any 
responses, so I can't really be sure that I'll get it if you mail to the
list.  Thanks for any help you can supply in this matter.

--
Paul

