From rem-conf-request@es.net Mon Mar  1 01:35:24 1993
To: rem-conf@es.net
Subject: new version of sd (v1.10) available
Date: Mon, 01 Mar 93 00:57:04 PST
From: Van Jacobson <van@ee.lbl.gov>
Status: RO
Content-Length: 609
X-Lines: 13

A new version of sd is available for anonymous ftp from ftp.ee.lbl.gov,
file sd.tar.Z.  This version will use the right ttl option for ivs 2.1
(-T instead of -t).  Default video if no format specified is now nv
instead of ivs.  Whiteboard now defaults to "allowed" (+w instead of -w).
It is also now possible to add to or modify the menus under the
"audio", "video" & "whiteboard" buttons (e.g., to add another video
format).  Finally, several bugs & ease-of-use problems in the "new
session" were fixed.

Let us know (via mail to sd@ee.lbl.gov) if there are any problems
or suggestions.  Thanks.

 - Van

From rem-conf-request@es.net Mon Mar  1 16:14:13 1993
To: rem-conf@es.net
Cc: deering@parc.xerox.com
Subject: Re: CERFnet Seminar: MBONE - the Multicast Backbone
Date: Mon, 1 Mar 1993 15:54:36 PST
Sender: Steve Deering <deering@parc.xerox.com>
From: Steve Deering <deering@parc.xerox.com>
Status: RO
Content-Length: 2795
X-Lines: 79

The CERFnet folks have agreed to let the Seminar that Pushpendra Mohta
advertised a month ago be A/Vcast on the MBone.  (It'll also give us
something nifty to demo to the seminar attendees, to have some of you folks
talk and wave back.)  The seminar is from 9 am to 4 pm California Time,
this Wednesday, March 3rd.  It is currently being advertised in 'sd', with
the following parameters:

	audio: address 224.6.6.4, ttl 191, port 4664, id 0
	video: address 224.6.6.5, ttl 127, port 4666  (nv format)

Here's part of Pushpendra's original announcement:

 CERFnet presents...
 
 Technology Update Seminar: 
 
 MBONE - the Multicast Backbone  
 	Videoconferencing Over the Internet       
 
 
 March 3, 1993
 9:00 a.m. to 4:00 p.m.
 San Diego Supercomputer Center
 San Diego, California
 
 
 Stay current with the latest Internet technologies - learn about the 
 MBONE - the multicast backbone now being used for experimental 
 videoconferencing over the Internet. This one day seminar, featuring 
 one of the architects of the MBONE, Steve Deering of Xerox PARC, will 
 tell you everything you need to know to understand what it is and 
 where it will lead. In addition, learn how you can become a part of 
 the MBONE project by joining CERFnet's MBONE Testbed.
 
 
 What is the MBONE?
 
 The MBONE is a virtual network that allows videoconferencing to the 
 desktop.  It is layered on top of portions of the physical Internet to 
 support routing of IP multicast packets. The MBONE is an outgrowth 
 of the first IETF "audiocast" experiments in which live audio and 
 video were multicast from the IETF meeting site to destinations 
 around the world. 
     
 
 Why is it important?
 
 The MBONE is the next step forward in the internetworking 
 environment. This is the leading edge of network engineering. It is 
 the basis for the applications of the near future. By participating in 
 the CERFnet MBONE Testbed you will be among the first network 
 sites to have multicast audio and video directly to your desktop. 
 
 Attend the seminar to learn more about the MBONE, and for 
 information on what you'll need  to participate in the CERFnet 
 MBONE Testbed.
 
 
 Agenda:
 
 Steve Deering, a member of the research staff at Xerox PARC, will 
 discuss 
 
 What it is: multicasting, tunnelling, the virtual topology, etc.
 
 How it works: Protocols: DVMRP - the distance vector multicast routing
 protocol; MOSPF - the IP multicast extension to the OSPF routing protocol;
 hardware; configuration; and administration. 
 
 The results of previous experiences: IETF, ISOC broadcasts
 
 Applications: audio and video 
 
 
 Pushpendra Mohta, Director of Engineering for CERFnet, will discuss
 
 CERFnet MBONE Testbed specifics: hardware, software, configuration, what
 you will need to participate
 

From rem-conf-request@es.net Tue Mar  2 00:50:01 1993
From: rogers@sled.gsfc.nasa.gov (Scott W. Rogers)
Subject: MultiCast for SUN FDDI/S sbus cards ?
To: rem-conf@es.net
Date: Tue, 2 Mar 1993 03:39:10 -0500 (EST)
X-Mailer: ELM [version 2.4 PL17]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 900
Status: RO
X-Lines: 25

I have the MULTICAST patches for SunOS 4.1.3 installed and working with
ethernet on my IPC.

I'm trying to install SUN's FDDI/S (an FDDI Single Attached Station
sbus) interface card.

Neither the loadable nor linkable modules will work.  It complains
about missing multicast stuff:

    ld: Undefined symbol
    _addmultiaddr
    _delmultiaddr


Does anybody have a patch for this?  I'll be logging a call to SUN
tomorrow, but I don't expect much help since they don't support
multicasting in 4.1.3.

Any help would be greatly appreciated!  :-)
-- 
------------------------------------------------------------------------
Scott W. Rogers   <rogers@sled.gsfc.nasa.gov>               301-286-1377
NASA Goddard Space Flight Center                       FAX: 301-286-5152
Computer Networking Branch - Code 933 -- Greenbelt, MD 20771
------------------------------------------------------------------------

From rem-conf-request@es.net Tue Mar  2 08:48:21 1993
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net, rogers@sled.gsfc.nasa.gov
Subject: Re: MultiCast for SUN FDDI/S sbus cards ?
Date: Tue, 2 Mar 1993 08:25:50 PST
Status: RO
Content-Length: 1744
X-Lines: 35

Scott writes:
> I'm trying to install SUN's FDDI/S (an FDDI Single Attached Station
> sbus) interface card.
> 
> Neither the loadable nor linkable modules will work.  It complains
> about missing multicast stuff
> 
>     ld: Undefined symbol
>     _addmultiaddr
>     _delmultiaddr

These routines can be found in sunif/if_subr.c. The multicast support has
completely changed the data structure used to store which multicast addresses
the driver is currently listening for. The addmultiaddr & delmultiaddr
routines, which were already actually operating on ethernet addresses, were
renamed ether_addmulti and ether_delmulti, and they were expanded to operate
on multiple address families and handle ranges of addresses.

Apparently, the FDDI driver decided to go off & use the ethernet multicast
data structures, and so it also wanted to reference these routines. That's not
going to be easy to do, though, given the multicast changes. Providing a
version of addmultiaddr() and delmultiaddr() with the right calling interface
isn't a problem, but you'd also need to change any code in the FDDI driver
which referenced the data structure these routines build, since it has
changed from an array to a linked list.

You _might_ be able to drop the old routines back in almost as-is, being
careful to only use them for the FDDI driver. However, I think you'll run
into some include file conflicts, and you'll also have to make sure that
none of the code thinks the FDDI interface is multicast capable.

I don't suppose there's any chance of getting sources to the FDDI driver?
That would be by far the best approach. You could then switch it over to
honoring the new form of the data structure, and possibly even make it
fully support IP multicast.

From rem-conf-request@es.net Wed Mar  3 13:15:15 1993
To: video@cic.net
Cc: rem-conf@es.net
Subject: Can anyone help?
Date: Wed, 03 Mar 93 15:59:45 -0500
From: "Thomas A. Easterday (+1 313 998 6285)" <tom@cic.net>
Status: RO
Content-Length: 1055
X-Lines: 26


------- Forwarded Message

Received: from spruce.cic.net by nic.cic.net (4.1/SMI-4.1)
	id AA27285; Wed, 3 Mar 93 15:03:08 EST
Errors-To: owner-tech-interest@cic.net
Received: by spruce.cic.net id AA26168
  (5.65c/IDA-1.4.4 for tech-interest-out); Wed, 3 Mar 1993 14:58:07 -0500
Received: from anlvm.ctd.anl.gov by spruce.cic.net with SMTP id AA26162
  (5.65c/IDA-1.4.4 for <tech-interest@cic.net>); Wed, 3 Mar 1993 14:58:06 -0500
Message-Id: <199303031958.AA26162@spruce.cic.net>
Received: from ANLVM.CTD.ANL.GOV by ANLVM.CTD.ANL.GOV (IBM VM SMTP
R1.2.2ANL-MX) with BSMTP id 9509; Wed, 03 Mar 93 14:03:09 CST
Date: Wed, 03 Mar 93 14:03:09 CST
From: "Larry Amiot" <B10523@anlvm.ctd.anl.gov>
To: <tech-interest@cic.net>
Sender: tech-interest-request@cic.net
Errors-To: owner-tech-interest@cic.net
Precedence: bulk

There is a company called INSOFT (I think that is the name) that sells
packet video teleconferencing software. I cannot seem to locate a
telephone number or location for the company. Can anyone help....Larry

------- End of Forwarded Message


From rem-conf-request@es.net Wed Mar  3 13:33:31 1993
Date: Wed, 3 Mar 93 13:23:32 PST
From: ari@es.net (Ari Ollikainen)
To: tom@cic.net, video@cic.net
Subject: Re: Can anyone help?
Cc: rem-conf@es.net
Status: RO
Content-Length: 1087
X-Lines: 29

----------Forwarded--------------
Date: Wed, 03 Mar 93 14:03:09 CST
From: "Larry Amiot" <B10523@anlvm.ctd.anl.gov>
To: <tech-interest@cic.net>
Sender: tech-interest-request@cic.net
Errors-To: owner-tech-interest@cic.net
Precedence: bulk

There is a company called INSOFT (I think that is the name) that sells
packet video teleconferencing software. I cannot seem to locate a
telephone number or location for the company. Can anyone help....Larry

-----------------------------------

Certainly. InSoft is in Mechanicsburg, PA. 
           (717) 730-9501  FAX: (717) 730-9504
	   e-mail info@insoft.com

If you want an opinionated assessment of Communique! I would be happy to 
oblige...



~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Wed Mar  3 17:35:20 1993
Date: Wed, 3 Mar 93 20:26:27 EST
From: herman@sunpix.East.Sun.COM (Herman Towles - Sun NC Development Center)
To: tom@cic.net
Subject: Re: Can anyone help?
Cc: rem-conf@es.net
Content-Length: 40
Status: RO
X-Lines: 4


InSoft, Inc.
Grantham, PA
717-766-6290

From rem-conf-request@es.net Wed Mar  3 17:54:31 1993
Date: Wed, 3 Mar 93 17:45:22 PST
From: ari@es.net (Ari Ollikainen)
To: herman@sunpix.east.sun.com, tom@cic.net
Subject: Re: Can anyone help?
Cc: rem-conf@es.net
Status: RO
Content-Length: 1017
X-Lines: 27

Sorry Herman, but InSoft is now in Mechanicsburg:

	Executive Park West I, 
	Suite 307,
	4718 Old Gettysburg Road, 
	Mechanicsburg, PA 17055

	(717) 730-9501
    FAX (717) 730-9504
   E-mail info@insoft.com

Since I visited their offices in January, I can vouch for the accuracy of this 
address information over the old Grantham location. I guess you folks
at Sun NC haven't had any visits with the InSoft folk for a while, eh?

I stand by the accuracy of my information.

BTW: How about posting some information to rem-conf about the VideoPix 
follow-on and about the software compressors in the new Image Library?


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Thu Mar  4 10:46:03 1993
From: Pavel Curtis <Pavel@parc.xerox.com>
Sender: Pavel Curtis <pavel@parc.xerox.com>
Fake-Sender: pavel@parc.xerox.com
To: rem-conf@es.net
Subject: [Steve Putz: "Computer talk radio on horizon"]
Date: Thu, 4 Mar 1993 10:23:47 PST
Status: RO
Content-Length: 1064
X-Lines: 34

Can any of you shed some light on this?

	Pavel

------- Start of forwarded message -------
Date:	Thu, 4 Mar 1993 09:22:17 -0800
From:	Steve Putz <putz@parc.xerox.com>
To:	mediaspaces.PARC@xerox.com
Subject: "Computer talk radio on horizon"

Does anyone know the story behind the "Computer talk radio on horizon" article on the front page of today's San Jose Mercury News?  It starts:

------------
Computer talk radio on horizon

New Internet program is hailed as the future

New York Times

SAN FRANCISCO - Talk radio is coming to desktop computers.
  Within a few weeks, a Virginia-based entrepreneur plans
to begin broadcasting a weekly 30-minute radio talk show
on Internet, the global computer network that links more
than 10 million scientists, academics, engineers and
high-tech industry executives.
-----------

It goes on at length but fails to give much detail.  The only name
mentioned (besides some quotes from Negroponte, etc.) is Carl Malamud
from Alexandria, Va.

So what is the relationship to vat, etc.?

------- End of forwarded message -------

From rem-conf-request@es.net Thu Mar  4 11:30:54 1993
To: rem-conf@es.net
Cc: Pavel@parc.xerox.com
Subject: Malamud's Talk Radio
Date: Thu, 04 Mar 93 11:14:48 -0800
From: berc@src.dec.com
X-Mts: smtp
Status: RO
Content-Length: 2585
X-Lines: 56


For those who haven't seen it.

------- Forwarded Message

Date: Wed, 3 Mar 93 09:50:03 EST
Sender: ietf-request@IETF.CNRI.Reston.VA.US
From: Carl Malamud <carl@malamud.com>
To: ietf@CNRI.Reston.VA.US
Subject: Internet Talk Radio
Org: Internet Talk Radio: Flame of the Internet

I was kind of hoping to defer an announcement on this project until IETF
week, but the publicity mill is moving fast enough that a note seems more
appropriate now.

On March 31, I'm launching a new service on the Internet called Internet
Talk Radio.  Internet Talk Radio is a "radio" metaphor: professionally
produced radio programs that show up on the net as audio files.  You
can multicast them, or you can simply FTP the files and play.  All I'm
doing is producing the information and you are free to distribute at
will using the protocol of your choice and to change the encoding format
of the data to suit your computing platform.

Distribution starts from UUNET and fans out to regional networks in an
attempt to avoid excessive duplicate transfers.  If you're a local
net, you should contact your service provider.  If you're a service provider,
send mail to info@radio.com and I'll send you back instructions.  If you're
in Europe, mcsun at EUnet will be the initial spool point.  If you're in
Japan, WIDE and IIJ will do distribution.  If you're an Alternet customer,
you'll simply use anonymous FTP from UUNET.  We are not using the MBONE, although
the networks that constitute the MBONE are certainly welcome to use that
distribution medium if they feel that it is appropriate.

The first show is "Geek of the Week" (;-), an interview show with members
of the community.  The program will be around a half-hour (about 15 Mbytes
in standard PCM: 8000 samples/second, 8-bit, mu-law encoding).  The program is
sponsored by Sun Microsystems and O'Reilly & Associates.  Before the ugly
spectre of AUP violations flames up .... we use a National Public Radio-style
ack scheme consisting of just a couple of sentences.  Indications from at
least two of the large government networks are that we are compliant with
their Appropriate Use Policies.

No need to do anything now ... the service doesn't start until March 31.  I
will give a 30-minute talk at the IETF on April 1 to explain this new
service and to answer questions.  I figured since the New York Times is
running an article on this, it's probably appropriate to at least get some
preliminary information out to the net.

Send Inquiries to: info@radio.com

Carl Malamud
Internet Talk Radio
"Flame of the Internet"

------- End of Forwarded Message

From rem-conf-request@es.net Thu Mar  4 11:49:23 1993
Date: Thu, 4 Mar 93 14:33:06 EST
From: carl@malamud.com (Carl Malamud)
To: Pavel@parc.xerox.com
Subject: Re: [Steve Putz: "Computer talk radio on horizon"]
Cc: rem-conf@es.net
Org: Internet Talk Radio
Status: RO
Content-Length: 14545
X-Lines: 306


Hi -

I can shed a bit of light on this.  The article is about a service that
I'm trying to start, a "radio" metaphor for the Internet.  What I'm doing
is producing sound files which will then be moved around the world starting
with UUNET in Virginia.  The first few levels of fanout are almost certainly
FTP-based. 

At some point, people might want to multicast files.  Sun, for example,
will multicast the files inside of their network.  I'm not planning to 
abuse the MBONE since I'm just a source of data, not a network operator.
My feeling is that any decision to multicast my programs over the MBONE
is made by the collection of networks that form the MBONE.

I'm attaching a copy of an article that is in this month's issue of
ConneXions.  Please let me know if you have any questions.

Carl Malamud

The following article is reprinted with permission from ConneXions.  ConneXions
is published by Interop Company.  More information can be obtained from the
electronic mail address ole@interop.com.

                       Internet Talk Radio
                  Carl Malamud (carl@radio.com)

	Over the past few years, two trends have come together to
present an opportunity for a new type of journalism.  On the one
hand, the trade press has focused on marketing and product
reviews, leaving an ever-larger gap for a general-interest,
technically-oriented publication focused on the Internet.  At the
same time, the Internet has made great progress in supporting
multimedia communication, through standards such as IP
multicasting and MIME messaging.

	Internet Talk Radio attempts to fuse these two trends and
form a new type of publication: a news and information service
about the Internet, distributed on the Internet.  Internet Talk
Radio is modeled on National Public Radio and has a goal of
providing in-depth technical information to the Internet
community.  The service is made initially possible with support
from Sun Microsystems and O'Reilly & Associates.  Our goal is to
provide a self-sufficient, financially viable public news service
for the Internet community.

Head: Flame of the Internet

	The product of Internet Talk Radio is an audio file,
professionally produced and freely available on computer
networks.  To produce these files, we start with the raw data of
any journalistic endeavor: speeches, conference presentations,
interviews, and essays.

	This raw information is taped using professional-quality
microphones, mixers, and DAT recorders.  The information is then
brought back to our studios, and edited and mixed with music,
voice overs, and the other elements of a radio program. The "look
and feel" we strive for is akin to "All Things Considered" or
other programs that appeal to the general interest of the
intelligent listener.

	Our goal is to hit the topics that don't make it into the trade
press.  Instead of SNMP-compliant product announcements, we want
to present descriptions of SNMP.  Instead of articles on GOSIP,
we want to describe the latest Internet Drafts and place them in
perspective.  Instead of executive promotions, we want to give
summaries of mailing list activity and network stability. 
Instead of COMDEX, we want to cover the IETF.

Head: Town Crier to the Global Village

	The result of Internet Talk Radio's journalistic activities
is a series of audio files.  The native format we start with is
the Sun Microsystems .au format, closely related to the NeXT .snd
format.  This format consists of the CCITT Pulse Code Modulation
(PCM) standard of 8 bits per sample and a sampling rate of 8000
samples per second, using the µ-law encoding (a logarithmic
encoding of 8-bit data equivalent to a 14-bit linear encoding).  A
half-hour program would thus consist of 64,000 bits per second, or
about 15 Mbytes total.

	Programs are initially spooled on UUNET, the central machines
of the Alternet network.  Files are then moved over to various
regional networks for further distribution.  For example, EUnet,
a commercial network provider for Europe with service in 24
countries, will act as the central spooling area for the European
region.  The Internet Initiative Japan (IIJ) company will provide
the same service for Japanese networks.

	The goal of coordinated distribution is to reduce the load
on key links of the network.  Transferring a 15 Mbyte file over a
64 kbps link does not make sense during peak times.  On the other
hand, a leased line has the attribute that a bit unused is a bit
forever gone.  Transferring large files at low priority in non-
peak times has little or no incremental cost.
	
	Files thus move from the UUNET central spool area, to
regional spools, to national and local networks.  We anticipate
most of this transfer to be done using the FTP protocols, but
some networks are discussing the use of NNTP news groups and
MIME-based distribution lists.

	It is important to note that Internet Talk Radio is the
source of programming and does not control the distribution. 
These files are publicly available, subject only to the simple
license restrictions of no derivative work and no commercial
resale.  

	Distribution is controlled, as with all other data, by the
individual networks that make up the Internet.  We intend to work
closely with networks all over the world to ensure that there is
some coordination of distribution activity, but ultimate control
over this data is in the hands of those people who finance,
manage, and use networks.

	We don't believe indiscriminate use of anonymous FTP is the
proper method for distributing large archives.  Previous
experience with ITU standards, with RFC repositories, and with
large software archives such as the X Windows System indicates
that setting up a top-level distribution hierarchy goes a long
way towards alleviating network load.

	Even with a top-level hierarchy, however, there will always
be anonymous FTP sites and there will always be people that go to
the wrong FTP server.  This behavior is largely mitigated by
setting up enough "local" servers and publicizing their
existence.  Like any large distributor of data, we are mindful of
the load on the transcontinental and regional infrastructures and
will take aggressive steps to help minimize that load.

Head: Asynchronous Times, Asynchronous Radio

	Once files have made their way to a local or regional
network, they are moved to the desktop and played.  Once again
the individual users of the network decide how to present data. 
We hope to see a wide variety of different ways of having our
files played and only list a few of the more obvious methods.

	The simplest method to play a .au file on a Sparcstation is
to type "play filename."  If the file is placed on a Network File
System (NFS) file system on a central server, the user simply
mounts the file system and plays the file.  Alternatively, the
user copies the file to a local disk and plays it.

	More adventuresome playing of files uses multicasting.  A
simple multicast program called "radio" for a local Ethernet is
available from CWI, the mathematics institute of the Netherlands. 
A more sophisticated approach, IP multicasting, allows a program
to reach far beyond the confines of the Ethernet.

	IP multicasting might be used on a local basis, or can have
a global reach.  There is a consortium of regional networks that
have formed the Multicast Backbone (MBONE), used for audio and
video programming of key conferences such as the Internet
Engineering Task Force.

	Internet Talk Radio does not assume use of the MBONE for
playing files.  Needless to say, the operators of the MBONE are
free to play Internet Talk Radio files (and we would be delighted
if this happens), but it is up to the local network affiliates to
determine how and when they distribute this audio data.

	In many cases, people will want to play files on a wide
variety of different platforms.  The Sound Exchange (SOX) program
is a publicly-available utility that easily transforms a file
from one format to another.  Using this utility, the Macintosh,
Silicon Graphics, DECstation, PC, and many other platforms can
play Internet Talk Radio files.

Head: Geek of the Week

	In the spirit of dignified, conservative programming, the
first production from Internet Talk Radio is dubbed Geek of the
Week.  Geek of the Week features technical interviews with key
personalities on the Internet.  Some of the people who have
agreed to appear on Geek of the Week include Daniel Karrenberg of
the RIPE NCC, Dr. Marshall T. Rose of Dover Beach Consulting,
Milo Medin of the NASA Science Internet, and Daniel Lynch of
Interop Company.

	Geek of the Week focuses on technical issues facing the
Internet.  This initial program is sponsored by Sun Microsystems
and O'Reilly & Associates.  Their support makes it possible for
Geek of the Week to be produced professionally and then to be
distributed at no charge.

	One of the issues that Internet Talk Radio faces is the
vestiges of Appropriate Use Policies (AUPs) that linger from the
original ARPANET days.  While Sun Microsystems and O'Reilly &
Associates view Internet Talk Radio in terms of an investigation
of on-line publishing, of multicasting, and other engineering
issues, we feel it important that our sponsors are given due
credit in the programs.

	At first glance, this smacks of the crass and commercial. 
Indeed, it smacks of advertising.  Jumping to that conclusion,
however, would be a simplistic mistake.  The Appropriate Use
Policies were formulated to guarantee that networks are used for
the purposes envisioned by the funding agents.  In the case of an
AUP-constrained network such as the NSFNET, this means that use
of the network must benefit U.S. science and engineering.  

	We feel that an in-depth interview with Internet architects
clearly falls within the purview of all AUPs.  However,
we understand that certain networks may not accept certain types
of programming.  For this reason, our central spool areas are
carefully picked so they are AUP-free.  This way, if a network
feels the programming is inappropriate, they can simply inform
their users not to obtain or play the files.

	It should be noted that one advantage of supporting the
professional dissemination of news and information up-front is
that the user is not directly charged.  Somebody has to pay for
information to be produced, and the sponsorship model means that
copy protection, accounting, security, and all the other
complications of a charging model are avoided and that high-
quality news and information becomes increasingly available on
the Internet.

Head: The Medium is the Message

	While Geek of the Week is our flagship program, we intend to
intersperse mini-features throughout.  The Incidental Tourist,
for example, will feature restaurant reviews and other travel
information for sites throughout the world.  The Internet Hall of
Flame will highlight non-linear behavior on mailing lists, and we
will have periodic book reviews by Dan Dorenberg, one of the
founders of Computer Literacy Books.

	The logical extension to Geek of the Week is to begin
coverage of industry functions.  To date, we have received
permission to tape for later rebroadcast sessions and
presentations at the European RIPE meetings, the IETF, and at the
INTEROP Conferences.  We are negotiating with other industry
forums to try to establish permission to cover additional
conferences.

	Our hope is to begin providing news summaries of these key
conferences.  If you can't make it to the IETF, for example,
Internet Talk Radio would like to provide a half-hour news
summary describing what happened on each day.

	The next logical step is to begin producing analysis of key
technical topics.  Here, we look at in-depth (e.g., 15 minute)
summaries of technical topics such as MIME, proposals for the
next IP, SNMP v. 2, or the architecture of the Global Internet
Exchange (GIX).  We would also furnish analysis of political
topics, such as the POISED effort to reorganize the Internet
standards process, or the background of the IPv7 debate.

	Eventually, our hope is to combine all these reports
together and form a daily news broadcast to the Internet.  When
you walk in and start reading your mail, you simply click on the
"radio" icon and listen to Geek of the Week while deleting
messages from the more hyperactive mailing lists.

Head: Tomorrow is the Future

	The "radio" metaphor was carefully chosen.  We wanted an
alternative to plain ASCII files, yet did not feel that the
Internet infrastructure was ready for regular video feeds. 
Production of video or true multimedia required an order-of-
magnitude higher investment in production facilities.  After all,
we know bad TV since we see so much of it.

	Eventually, Internet Talk Radio wants to go beyond the
confines of the simple radio metaphor.  Already, we describe the
service as asynchronous radio, recognizing that our listeners can
start, stop, rewind, or otherwise control the operation of the
radio station.

	As a multicasting infrastructure gets deployed throughout
the Internet, we see the opportunity to expand the radio metaphor
and begin the creation of a truly new news medium.  Multicast
groups and videoconferencing tools allow the creation of an
Internet Town Hall, a moderated forum with a very wide reach, or
game shows like Name That Acronym where everybody gets to play.

	Because we are on the Internet, we can add a wide variety of
different programming techniques.  While listening to a series of
interviews about MIME messaging, for example, you might also
scroll through a series of Gopher menus that hold more
information about the MIME standards, or search a WAIS database
for a biography of the speakers.

	We hope that Internet Talk Radio will be the first of many
such information services on the Internet, supplementing the
random anarchy of news and mailing lists with professionally
produced news and information.  Indeed, we hope that Internet
Talk Radio forms the first of many "desktop broadcasting"
efforts.

	Internet Talk Radio debuts at the Columbus IETF at the end
of March.  Stay tuned for more information.

Head: For More Information

	Guido van Rossum, FAQ: Audio File Formats,
ftp.cwi.nl:/pub/AudioFormats2.10.  An excellent introduction to
audio formats, encoding, and other information about sound files
on different platforms.  This same site also has copies of the
SoundExchange (SOX) program for translating files into different
audio formats, and the Radio program for playing a sound file on
an Ethernet.


From rem-conf-request@es.net Fri Mar  5 07:20:39 1993
From: chang@chang.austin.ibm.com (kay chang)
Subject: Talk Show audio format
To: carl@malamud.com
Date: Fri, 5 Mar 93 9:00:08 CST
Cc: rem-conf@es.net
Status: RO
Content-Length: 742
X-Lines: 19

Carl,
For those of us not able to go to the Spring Interop,
can you please answer two questions for me ?

1. You mentioned the .au audio format is for SUN; does this mean you have to
   have a Sparc to be able to listen?  I use an RS/6000; does that imply there
   will be a format conflict?
2. What is the procedure for me to listen?
Thank you.
                       Regards, Kay

--
-----------------------------------------------------------------------------
Kay Chang                                AIX Communication Architecture, IBM
Tel: (512) 838-3542                      E-mail: chang@chang.austin.ibm.com
Zip 2503, 11400 Burnet Rd.  Austin, Tx. 78758-3493
-----------------------------------------------------------------------------



From rem-conf-request@es.net Fri Mar  5 08:19:28 1993
Date: Fri, 5 Mar 93 11:05:19 EST
From: hgs@research.att.com (Henning G. Schulzrinne)
To: rem-conf@es.net
Subject: 'synchronous Ethernet'
Status: RO
Content-Length: 193
X-Lines: 7

I heard references to 'synchronous Ethernet', geared towards carrying
synchronous traffic such as voice and video. Anybody have any pointers
to technical details?

Thanks.

Henning Schulzrinne

From rem-conf-request@es.net Fri Mar  5 09:24:41 1993
From: Antonio.Desimone@att.com
Date: 5 Mar 93 17:20:09 GMT
To: Henning.Schulzrinne@att.com
Original-From: tds@hoserve.att.com (Tony DeSimone)
Content-Length: 670
Content-Type: text
Sender: Antonio_DeSimone@ATT.COM (Tony DeSimone)
Original-To: hgs@research.att.com
Cc: rem-conf@es.net
Subject: Re: 'synchronous Ethernet'
Reply-To: tds@hoserve.att.com (Tony DeSimone)
Original-Date: Fri, 5 Mar 93 12:20:09 EST
Status: RO
X-Lines: 16

>>>>> On Fri, 5 Mar 93 11:05:19 EST, hgs@research.att.com (Henning G. Schulzrinne) said:

Henning> I heard references to 'synchronous Ethernet', geared towards carrying
Henning> synchronous traffic such as voice and video. Anybody have any pointers
Henning> to technical details?

There is some activity at IBM and NSC, I believe, to bring a proposal
to the IEEE 802 committee for adding another channel to Ethernet for
real-time traffic.  I really mean an honest-to-god, isochronous
channel, on the same cable--not using packets at all, as I recall.  I
saw something about a demo they did somewhere, but I have no idea
about the details...

Sorry to be so vague.

Tony

From rem-conf-request@es.net Fri Mar  5 09:31:06 1993
To: hgs@research.att.com (Henning G. Schulzrinne)
Cc: rem-conf@es.net
Subject: Re: 'synchronous Ethernet'
Date: Fri, 05 Mar 93 17:18:19 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Status: RO
Content-Length: 588
X-Lines: 18



 >I heard references to 'synchronous Ethernet', geared towards carrying
 >synchronous traffic such as voice and video. Anybody have any pointers
 >to technical details?
 
 Henning,

an 'orrible attempt to add real-time QoS-guaranteed services by adding
6Mbps worth of isoch traffic to standard CSMA/CD - apparently it has
quite a few followers, and could be done fast and "real cheap now"...

it's aimed at having 3 lots of 24-30 * 64kbps channels on top of the
normal mess, so you could have your primary rate ISDN
phone exchange cross-connect cake or what have you...and eat it!

 jon


From rem-conf-request@es.net Fri Mar  5 12:48:51 1993
Date: Fri, 5 Mar 93 15:33:45 EST
From: klemets@ground.cs.columbia.edu (Anders Klemets)
To: hyder@niwot.scd.ucar.EDU
Cc: rogers@sled.gsfc.nasa.gov, rem-conf@es.net
Subject: MultiCast for SUN FDDI/S sbus cards ?
Status: RO
Content-Length: 732
X-Lines: 17

> I also have been unable to get multicast installed in conjunction with
> Sun SBus FDDI cards.  If anyone has been successful please say so.

I have been using the Sun FDDI/S cards on SS-10's and regular
Sparcstations with IP multicast kernels for quite some time now.  
I cannot use IP multicast on the FDDI card but at least it can coexist
with the multicast kernel.

There is more to it than just defining addmultiaddr() and
delmultiaddr().  The size of the arpcom structure has changed, which
causes the offsets of a couple of fields in that structure to be
different.  This manifests itself as a bug whose symptom is that ARP
does not work on the FDDI interface.  I have a fix for this.  Send me
mail if you want it. 

Anders


From rem-conf-request@es.net Fri Mar  5 13:36:51 1993
Date: Fri, 5 Mar 1993 12:42:20 PST
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: Porting nv to other platforms
Status: RO
Content-Length: 1757
X-Lines: 32

Hello everyone...

Now that we seem to have momentarily stabilized with nv version 2.7, I
thought it would be a good time for me to share some additional state with
everyone about my plans for the next nv release...

Version 2.7 is out in source form for two reasons. First, several people were
simply interested in the fine detail about how some of the compression
code worked, and how I was using the Tk toolkit. In addition, though, I'm
hoping that having the sources available might encourage ports to other
platforms. In moving to the SGI, the code seemed to be pretty portable.
Except for the new frame grab routine, no changes were required at all --
just a recompile. I have received some code from Tatsuo Nagamatsu at
Sony to make 'nv' run on the Sony NEWS machines (Thanks!), and
would be interested in talking to anyone who wanted to see it running on
their favorite platform or with their favorite frame grabber.

Version 3.0 is already quite different in some of the machine independent
pieces, and slightly different in things like the grab routines, but I think
most of the effort of porting 2.7 should be directly usable in the next major
release. I'd like to have that available in time for the Columbus IETF, and
have it running on as many boxes as possible.

While 3.0 won't be ready for a few weeks, I'd be happy to provide alpha
sources for use in porting, as well as technical assistance. Just contact me
via email at <frederick@parc.xerox.com> if you're interested. The major
requirements are that you have X11 installed with the Tcl & Tk packages
from Berkeley. The current version of 'nv' is using Tcl 6.4 and Tk 2.3. I
may be switching to Tcl 6.6 & Tk 3.1 soon, if that isn't too painful.
--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Fri Mar  5 13:57:54 1993
To: rem-conf@es.net
Subject: Re: 'synchronous Ethernet'
From: Keith Lantz <lantz@vicor.com>
Date: Fri, 05 Mar 93 13:23:20 -0800
Sender: lantz@vicor.com
Status: RO
Content-Length: 5625
X-Lines: 110

Copyright
ELECTRONIC ENGINEERING TIMES [EC14] via NewsNet
November 16, 1992
 
 
IBM, National take Ethernet to multimedia
 
    By LORING WIRBEL
 
 
 
    Las Vegas, Nev. - National Semiconductor Corp. and IBM Corp. will take
the wraps off a plan for isochronous (time-dependent) Ethernet services at
this week's Comdex/Fall, a week after making a related technology proposal
to the IEEE meeting in La Jolla, Calif.
 
     At the Multimedia Pavilion in Bally's Casino, National and IBM will
show multiple channels of interactive, bidirectional videoconferencing
using a variety of compression methods, including some proprietary
algorithms that the two companies refuse to discuss in detail.
 
    Last summer, National sources indicated that, even as they moved to
Asynchronous Transfer Mode (ATM) support for future multimedia networks,
they would upgrade Ethernet to handle high-quality live video (see Aug. 3,
page 1). The two companies revealed last week they will support
time-dependent voice and video services on a packet-based Ethernet LAN by
adding a 6-Mbit/second dedicated virtual-call channel on top of Ethernet's
10-Mbit/s packet channel. The system will use a switched-Ethernet hub
architecture of the type developed by such vendors as Kalpana Inc. and
Alantec Inc. IBM has not revealed whether it plans to extend the concept
to Token-Ring networks as well.
 
    The ``isoENET'' concept requires only some additional multiplexer
circuitry for a PC node and a central hub, which National will try to keep
to less than twice the price of traditional Ethernet controller chip sets.
Switch chips for the hub also will be part of the isoENET plans.
 
     Moreover, National and IBM have insisted that a dedicated isochronous
channel, creating a virtual circuit similar to a phone call, must be
developed for reasonable video performance.
 
    ``When you talk about packet traffic in general, there's a question of
the quality of service for video,'' said Mike Evans, director of
applications technology in National's corporate technology group. ``What
we say is, here's a dedicated pipe for isochronous multimedia, and you can
choose different qualities of service for local-area and wide-area use.''
 
    Whereas the fast-Ethernet proposals were made to IEEE's 802.3
committee for Ethernet, National and IBM made their proposal to the IEEE's
802.9 committee working on merged voice/ data/video networks. Apple
Computer Inc. (Cupertino, Calif.) expressed its support for the concept at
last week's La Jolla meeting, but it will not take part in this week's
Comdex demo.
 
    The 6.144-Mbit/s channel can provide 96 full-duplex B-style Integrated
Services Digital Network (ISDN) channels (64 kbits/s each) to every
Ethernet node. The 6-Mbit number represents a figure three times the
European E1 digital line rates and four times the U.S. T1 rates, allowing
easy interfaces to public networks. IsoENET nodes can thus connect locally
through a switched wiring hub, campus-wide through a PBX or
metropolitan-network services, or wide-area through the public network.
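As a quick sanity check on the numbers above (an editorial sketch, not part of the original article): 96 B channels of 64 kbit/s come to exactly 6.144 Mbit/s, which is three E1 line rates; the "four times T1" comparison holds for the payload channel count (4 x 24 channels) rather than the 1.544 Mbit/s T1 line rate.

```python
# Sanity-check the isoENET channel arithmetic quoted above.
B_CHANNEL = 64_000                 # one ISDN B channel, bit/s
iso_bps = 96 * B_CHANNEL           # the 6.144 Mbit/s isochronous channel
assert iso_bps == 6_144_000

E1_LINE = 2_048_000                # European E1 line rate, bit/s
assert iso_bps == 3 * E1_LINE      # exactly three E1s

T1_PAYLOAD_CHANNELS = 24           # a U.S. T1 carries 24 x 64 kbit/s
assert 96 == 4 * T1_PAYLOAD_CHANNELS   # "four times T1" in channel count
print(iso_bps)                     # 6144000
```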
 
    Evans said that one of the reasons the proposal was structured this
way was to keep the concept compatible with narrow-band ISDN, Fiber
Distributed Data Interface (FDDI) LANs, ATM cell-relay formats and,
eventually, broadband ISDN. National is presenting portions of its
proposal to the American National Standards Institute's FDDI-II committee,
and to the ATM Forum, the industry coalition working on private extensions
of the B-ISDN ATM standard.
 
    The National-IBM proposal specifies 100-meter distances between nodes,
just like Ethernet, and requires local groups to be connected in a star
configuration through a central hub. The star is already required for
existing 10 Base T Ethernet networks, as well as in all the new
fast-Ethernet proposals before the IEEE. Evans said that Ethernet's
original bus-based configuration is dying out.
 
    The 6-Mbit channel employs the 4B/5B encoding used for FDDI and
specifies a call-setup protocol similar to the Q.931 protocol used in
ISDN. Evans said that the original ISDN call-setup model was simplified
for a LAN environment, but more functionality was added to support video
teleconferencing. In the Comdex demo, multiple Ethernet hubs will be
connected to an FDDI-II backbone, but Evans emphasized that isoENET does
not require the use of FDDI-II.
 
    ``For the smaller work group, isochronous Ethernet can be supported
with just the additional channels in the PC and the hub,'' Evans said.
``As you get to 50 or 60 users, you will use a backbone, but it needn't be
FDDI-II. You could also use ATM or other technologies.''
 
    PC nodes or hubs that are isoENET-compatible will be able to use all
the standard driver software developed for data-only Ethernet. The chip
sets for isoENET will also have the embedded intelligence to recognize
standard Ethernet nodes on a per-port basis, supporting mixed networks of
isochronous Ethernet nodes and ``dumb'' nodes that can only support data
packets.
 
    If all goes well, the current project authorization request before the
IEEE for a preliminary study will be upgraded to a formal proposal by
February. National is going ahead with silicon development now, for a
multiplexer-controller that can be added to its standard Ethernet chip
set.
 
    Evans said that National and IBM are already working with many
multimedia software developers for isoENET support, but he refused to
comment on any future links between the National-IBM networking project
and the IBM-Texas Instruments Inc. Mwave multimedia development
environment.



From rem-conf-request@es.net Fri Mar  5 18:51:29 1993
Date: Sat, 6 Mar 1993 12:27:06 +1030
To: Pavel@parc.xerox.com
Subject: Re: [Steve Putz: "Computer talk radio on horizon"]
From: simon@internode.com.au (Simon Hackett)
Reply-To: simon@internode.com.au
Cc: rem-conf@es.net, mbone@isi.edu
Sender: simon@internode.com.au
Repository: internode.com.au
Originating-Client: Zen.internode.com.au
Status: RO
Content-Length: 16514
X-Lines: 348

> New York Times
> 
> SAN FRANCISCO - Talk radio is coming to desktop computers.
>   Within a few weeks, a Virginia-based entrepreneur plans
> to begin broadcasting a weekly 30-minute radio talk show
> on Internet, the global computer network that links more
> than 10 million scientists, academics, engineers and
> high-tech industry executives.
> -----------
> 
> It goes on at length but fails to give much detail.  The only name
> mentioned (besides some quotes from Negroponte, etc.) is Carl Malamud
> from Alexandria, Va.
> 
> So what is the relationship to vat, etc.?
> 

No direct relationship to vat etc. Carl (carl@malamud.com)'s program
is being recorded onto conventional (well, DAT) tape professionally,
then converted to 8000 Hz mu-law sound (a la Sun workstations & most
other systems). The resulting 15 MB file (for a 30-minute weekly
program) will be put onto ftp.uunet.net for anonymous ftp, and will be
farmed out to many other major archive sites via ftp. From there, you
do whatever you want to do with it - ftp it over to yourself and play
it, or multicast it to your friends - whatever. It's not directly
related to the mbone, but the mbone is obviously a potential carrier
mechanism for it.

Personally, I think this exercise looks like a great little
experiment in popularising the internet, and I think there's no need
to be concerned about negative effects on the net from this one -
it's going to be controlled ftp based distribution as the "first
tier", and whatever you want to use locally after that.

In my opinion, what makes the most sense for the mbone with this
material is to avoid mbone-ing it globally (it's already going to be
ftp-transferred over the relevant links), but the mbone might make a
lot of sense in terms of local (site wide, city wide, maybe national)
redistribution for those wanting to "listen in". I'm not sure about
all of this, it's just my personal opinion - maybe it's something
that those involved with the mbone should talk to Carl about at the
upcoming IETF - he's giving a presentation on this stuff ("Internet
Talk Radio") at the IETF, so I suggest that this is the best forum
for those with an opinion on how this may or may not mesh with the
MBONE to talk it out with him. 

I'll append an article giving more detail on this stuff (below) so
you can get a better idea of what this is all about. Carl can answer
more questions directly, but be a little patient - he's been
literally swamped with interested people.

Meanwhile, if you (as an mbone network carrier/provider) have a big ftp
archive site available, I suggest getting in touch with Carl if you
want to arrange to be a second-tier archive for this material.

Cheers,
   Simon Hackett

-------------------cut here--------------------------------------

The following article is reprinted with permission from ConneXions.  ConneXions
is published by Interop Company.  More information can be obtained from the
electronic mail address ole@interop.com.

                       Internet Talk Radio
                  Carl Malamud (carl@radio.com)

	Over the past few years, two trends have come together to
present an opportunity for a new type of journalism.  On the one
hand, the trade press has focused on marketing and product
reviews, leaving an ever-larger gap for a general-interest,
technically-oriented publication focused on the Internet.  At the
same time, the Internet has made great progress in supporting
multimedia communication, through standards such as IP
multicasting and MIME messaging.

	Internet Talk Radio attempts to fuse these two trends and
form a new type of publication: a news and information service
about the Internet, distributed on the Internet.  Internet Talk
Radio is modeled on National Public Radio and has a goal of
providing in-depth technical information to the Internet
community.  The service is made initially possible with support
from Sun Microsystems and O'Reilly & Associates.  Our goal is to
provide a self-sufficient, financially viable public news service
for the Internet community.

Head: Flame of the Internet

	The product of Internet Talk Radio is an audio file,
professionally produced and freely available on computer
networks.  To produce these files, we start with the raw data of
any journalistic endeavor: speeches, conference presentations,
interviews, and essays.

	This raw information is taped using professional-quality
microphones, mixers, and DAT recorders.  The information is then
brought back to our studios, and edited and mixed with music,
voice overs, and the other elements of a radio program. The "look
and feel" we strive for is akin to "All Things Considered" or
other programs that appeal to the general interest of the
intelligent listener.

	Our goal is to hit the topics that don't make it into the trade
press.  Instead of SNMP-compliant product announcements, we want
to present descriptions of SNMP.  Instead of articles on GOSIP,
we want to describe the latest Internet Drafts and place them in
perspective.  Instead of executive promotions, we want to give
summaries of mailing list activity and network stability. 
Instead of COMDEX, we want to cover the IETF.

Head: Town Crier to the Global Village

	The result of Internet Talk Radio's journalistic activities
is a series of audio files.  The native format we start with is
the Sun Microsystems .au format, closely related to the NeXT .snd
format.  This format consists of the CCITT Pulse Code Modulation
(PCM) standard of 8 bits per sample and a sampling rate of 8000
samples per second, using the μ-law encoding (a logarithmic
encoding of 8-bit data equivalent to a 14-bit linear encoding).
A half-hour program thus runs at 64,000 bits per second, or about
15 Mbytes total.
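The format arithmetic and the encoding itself can be sketched in a few lines of Python (a modern editorial illustration under the parameters given above; mulaw_encode follows the standard G.711 segmented-logarithmic algorithm, not any code from Internet Talk Radio):

```python
SAMPLE_RATE = 8000           # samples per second
PROGRAM_SECONDS = 30 * 60    # one half-hour program

# One mu-law byte per sample: 64,000 bit/s, ~14.4 MB per program
# (the "15 Mbytes" figure quoted above).
bits_per_second = SAMPLE_RATE * 8
total_bytes = SAMPLE_RATE * PROGRAM_SECONDS

def mulaw_encode(sample: int) -> int:
    """Encode a 16-bit signed linear sample as one 8-bit mu-law byte
    (standard G.711 encoding: sign bit, 3-bit segment, 4-bit mantissa)."""
    BIAS, CLIP = 0x84, 32635
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # Find the segment: position of the highest set bit above bit 7.
    exponent, mask = 7, 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

print(bits_per_second)       # 64000
print(total_bytes)           # 14400000, i.e. roughly 15 MB
print(hex(mulaw_encode(0)))  # 0xff -- mu-law "silence"
```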

	Programs are initially spooled on UUNET, the central machines
of the Alternet network.  Files are then moved over to various
regional networks for further distribution.  For example, EUnet,
a commercial network provider for Europe with service in 24
countries, will act as the central spooling area for the European
region.  The Internet Initiative Japan (IIJ) company will provide
the same service for Japanese networks.

	The goal of coordinated distribution is to reduce the load
on key links of the network.  Transferring a 15 Mbyte file over a
64 kbps link does not make sense during peak times.  On the other
hand, a leased line has the attribute that a bit unused is a bit
forever gone.  Transferring large files at low priority during
non-peak times has little or no incremental cost.
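The tradeoff described above is easy to quantify (a back-of-the-envelope editorial sketch, not figures from the article): over a 64 kbit/s line, one program takes longer to transfer than it does to play.

```python
FILE_BYTES = 15_000_000        # one half-hour program, as described above
LINK_BPS = 64_000              # a 64 kbit/s leased line

transfer_seconds = FILE_BYTES * 8 / LINK_BPS
print(transfer_seconds / 60)   # 31.25 minutes -- longer than the program
```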
	
	Files thus move from the UUNET central spool area, to
regional spools, to national and local networks.  We anticipate
most of this transfer to be done using the FTP protocols, but
some networks are discussing the use of NNTP news groups and
MIME-based distribution lists.

	It is important to note that Internet Talk Radio is the
source of programming and does not control the distribution. 
These files are publicly available, subject only to the simple
license restrictions of no derivative work and no commercial
resale.  

	Distribution is controlled, as with all other data, by the
individual networks that make up the Internet.  We intend to work
closely with networks all over the world to ensure that there is
some coordination of distribution activity, but ultimate control
over this data is in the hands of those people who finance,
manage, and use networks.

	We don't believe indiscriminate use of anonymous FTP is the
proper method for distributing large archives.  Previous
experience with ITU standards, with RFC repositories, and with
large software archives such as the X Window System indicates
that setting up a top-level distribution hierarchy goes a long
way towards alleviating network load.

	Even with a top-level hierarchy, however, there will always
be anonymous FTP sites and there will always be people that go to
the wrong FTP server.  This behavior is largely mitigated by
setting up enough "local" servers and publicizing their
existence.  Like any large distributor of data, we are mindful of
the load on the transcontinental and regional infrastructures and
will take aggressive steps to help minimize that load.

Head: Asynchronous Times, Asynchronous Radio

	Once files have made their way to a local or regional
network, they are moved to the desktop and played.  Once again
the individual users of the network decide how to present data. 
We hope to see a wide variety of different ways of having our
files played and only list a few of the more obvious methods.

	The simplest method to play a .au file on a Sparcstation is
to type "play filename."  If the file is placed on a Network File
System (NFS) file system on a central server, the user simply
mounts the file system and plays the file.  Alternatively, the
user copies the file to a local disk and plays it.

	More adventuresome playing of files uses multicasting.  A
simple multicast program called "radio" for a local Ethernet is
available from CWI, the mathematics institute of the Netherlands. 
A more sophisticated approach, IP multicasting, allows a program
to reach far beyond the confines of the Ethernet.

	IP multicasting might be used on a local basis, or can have
a global reach.  There is a consortium of regional networks that
have formed the Multicast Backbone (MBONE), used for audio and
video programming of key conferences such as the Internet
Engineering Task Force.

	Internet Talk Radio does not assume use of the MBONE for
playing files.  Needless to say, the operators of the MBONE are
free to play Internet Talk Radio files (and we would be delighted
if this happens), but it is up to the local network affiliates to
determine how and when they distribute this audio data.

	In many cases, people will want to play files on a wide
variety of different platforms.  The Sound Exchange (SOX) program
is a publicly-available utility that easily transforms a file
from one format to another.  Using this utility, the Macintosh,
Silicon Graphics, DECstation, PC, and many other platforms can
play Internet Talk Radio files.

Head: Geek of the Week

	In the spirit of dignified, conservative programming, the
first production from Internet Talk Radio is dubbed Geek of the
Week.  Geek of the Week features technical interviews with key
personalities on the Internet.  Some of the people who have
agreed to appear on Geek of the Week include Daniel Karrenberg of
the RIPE NCC, Dr. Marshall T. Rose of Dover Beach Consulting,
Milo Medin of the NASA Science Internet, and Daniel Lynch of
Interop Company.

	Geek of the Week focuses on technical issues facing the
Internet.  This initial program is sponsored by Sun Microsystems
and O'Reilly & Associates.  Their support makes it possible for
Geek of the Week to be produced professionally and then to be
distributed at no charge.

	One of the issues that Internet Talk Radio faces is the
vestiges of Appropriate Use Policies (AUPs) that linger from the
original ARPANET days.  While Sun Microsystems and O'Reilly &
Associates view Internet Talk Radio in terms of an investigation
of on-line publishing, of multicasting, and other engineering
issues, we feel it important that our sponsors are given due
credit in the programs.

	At first glance, this smacks of the crass and commercial. 
Indeed, it smacks of advertising.  Jumping to that conclusion,
however, would be a simplistic mistake.  The Appropriate Use
Policies were formulated to guarantee that networks are used for
the purposes envisioned by the funding agents.  In the case of
AUP-constrained networks such as the NSFNET, this means that use
of the network must benefit U.S. science and engineering.  

	We feel that an in-depth interview with Internet architects
clearly falls within the purview of all AUPs.  However,
we understand that certain networks may not accept certain types
of programming.  For this reason, our central spool areas are
carefully picked so they are AUP-free.  This way, if a network
feels the programming is inappropriate, they can simply inform
their users not to obtain or play the files.

	It should be noted that one advantage of supporting the
professional dissemination of news and information up-front is
that the user is not directly charged.  Somebody has to pay for
information to be produced, and the sponsorship model means that
copy protection, accounting, security, and all the other
complications of a charging model are avoided and that high-
quality news and information becomes increasingly available on
the Internet.

Head: The Medium is the Message

	While Geek of the Week is our flagship program, we intend to
intersperse mini-features throughout.  The Incidental Tourist,
for example, will feature restaurant reviews and other travel
information for sites throughout the world.  The Internet Hall of
Flame will highlight non-linear behavior on mailing lists, and we
will have periodic book reviews by Dan Dorenberg, one of the
founders of Computer Literacy Books.

	The logical extension to Geek of the Week is to begin
coverage of industry functions.  To date, we have received
permission to tape sessions and presentations at the European RIPE
meetings, the IETF, and the INTEROP Conferences for later
rebroadcast.  We are negotiating with other industry
forums to try and establish permission to cover additional
conferences.

	Our hope is to begin providing news summaries of these key
conferences.  If you can't make it to the IETF, for example,
Internet Talk Radio would like to provide a half-hour news
summary describing what happened on each day.

	The next logical step is to begin producing analysis of key
technical topics.  Here, we look at in-depth (e.g., 15 minute)
summaries of technical topics such as MIME, proposals for the
next IP, SNMP v. 2, or the architecture of the Global Internet
Exchange (GIX).  We would also furnish analysis of political
topics, such as the POISED effort to reorganize the Internet
standards process, or the background of the IPv7 debate.

	Eventually, our hope is to combine all these reports
together and form a daily news broadcast to the Internet.  When
you walk in and start reading your mail, you simply click on the
"radio" icon and listen to Geek of the Week while deleting
messages from the more hyperactive mailing lists.

Head: Tomorrow is the Future

	The "radio" metaphor was carefully chosen.  We wanted an
alternative to plain ASCII files, yet did not feel that the
Internet infrastructure was ready for regular video feeds. 
Production of video or true multimedia would require an
order-of-magnitude higher investment in production facilities.
After all, we know bad TV when we see it, since we see so much of it.

	Eventually, Internet Talk Radio wants to go beyond the
confines of the simple radio metaphor.  Already, we describe the
service as asynchronous radio, recognizing that our listeners can
start, stop, rewind, or otherwise control the operation of the
radio station.

	As a multicasting infrastructure gets deployed throughout
the Internet, we see the opportunity to expand the radio metaphor
and begin the creation of a truly new news medium.  Multicast
groups and videoconferencing tools allow the creation of an
Internet Town Hall, a moderated forum with a very wide reach, or
game shows like Name That Acronym where everybody gets to play.

	Because we are on the Internet, we can add a wide variety of
different programming techniques.  While listening to a series of
interviews about MIME messaging, for example, you might also
scroll through a series of Gopher menus that hold more
information about the MIME standards, or search a WAIS database
for a biography of the speakers.

	We hope that Internet Talk Radio will be the first of many
such information services on the Internet, supplementing the
random anarchy of news and mailing lists with professionally
produced news and information.  Indeed, we hope that Internet
Talk Radio forms the first of many "desktop broadcasting"
efforts.

	Internet Talk Radio debuts at the Columbus IETF at the end
of March.  Stay tuned for more information.

Head: For More Information

	Guido van Rossum, FAQ: Audio File Formats,
ftp.cwi.nl:/pub/AudioFormats2.10.  An excellent introduction to
audio formats, encoding, and other information about sound files
on different platforms.  This same site also has copies of the
SoundExchange (SOX) program for translating files into different
audio formats, and the Radio program for playing a sound file on
an Ethernet.




From rem-conf-request@es.net Mon Mar  8 09:57:27 1993
To: "Gerard V. Talatinian" <gtalatin@vartivar.ucs.indiana.edu>
Cc: rem-conf@es.net
Subject: Re: CERFnet Seminar notes
Date: Mon, 8 Mar 1993 09:38:44 PST
Sender: Steve Deering <deering@parc.xerox.com>
From: Steve Deering <deering@parc.xerox.com>
Status: RO
Content-Length: 266
X-Lines: 9

Gerard,

My slides from the CERFnet MBone seminar last week can be fetched from
parcftp.xerox.com:pub/net-research/cerfnet-seminar-slides.ps.Z.
In the same directory, you can find the past maps of the MBone which
I also distributed to the seminar attendees.

Steve


From rem-conf-request@es.net Mon Mar  8 14:04:03 1993
Date: Mon, 8 Mar 1993 15:43:14 -0600
From: dab@berserkly.cray.com (David A. Borman)
To: rem-conf@es.net
Subject: FDDI Multicast query
Content-Length: 1051
Status: RO
X-Lines: 30


Hi.  I'm asking this question here because this is where
there are probably a lot of people who know the answers
to my questions.

I have an opportunity to provide input into an FDDI card
on how to do multicast (FDDI global address) support.  What
I would like to know is what other FDDI cards provide in
terms of supporting multicast.

For example, from looking at the man page for the DEFZA
card from DEC, I infer that the hardware can recognize
up to 64 multicast addresses.

My specific questions are:
	1) How many multicast addresses are supported in
	   hardware by various FDDI cards?
	2) Is there a way to ask the card for all FDDI
	   global addresses (high bit set)?
	3) What's a good minimum number of addresses
	   to ask the hardware designers to support? 16?
	   32? 64? 256?
	4) If a hardware designer told you there was very
	   little room to add multicast support, what would
	   be the minimum HW functionality that you would
	   request?
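On question 2: as the question itself puts it, FDDI global (group) addresses are those with the high bit of the first address byte set, since FDDI transmits the most significant bit first. A toy software model of the kind of exact-match filter being asked about (an editorial sketch; the slot count and the all-multicast overflow fallback are illustrative, not any particular card's design):

```python
def is_fddi_group_address(mac: bytes) -> bool:
    # FDDI sends the MSB of each byte first, so the group/individual
    # bit is the high-order bit of the first address byte.
    return bool(mac[0] & 0x80)

class MulticastFilter:
    """Toy model of an N-slot exact-match multicast filter that falls
    back to accepting all group addresses once its table overflows
    (a common hardware design, shown here only for illustration)."""

    def __init__(self, slots: int = 64):
        self.slots = slots
        self.table: set[bytes] = set()
        self.all_multi = False       # "pass every group address" mode

    def add(self, mac: bytes) -> None:
        if len(self.table) < self.slots:
            self.table.add(mac)
        else:
            self.all_multi = True    # overflow: software must filter

    def accepts(self, mac: bytes) -> bool:
        if not is_fddi_group_address(mac):
            return False
        return self.all_multi or mac in self.table
```

The overflow behavior suggests one answer to question 4: even a very small exact-match table is workable as a minimum, provided the hardware also offers an all-multicast mode so the host can filter in software.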

Any help would be greatly appreciated, and thanks in advance.

		-David Borman, dab@cray.com

From rem-conf-request@es.net Mon Mar  8 21:11:21 1993
Date: Mon, 8 Mar 1993 22:57:54 -0600 (CST)
From: Bill Lidinsky <lidinsky@hep.net>
Subject: "ether-like" Proposals
To: rem-conf@es.net
Cc: Mark Kaletka <kaletka@dcd00.fnal.gov>
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Status: RO
Content-Length: 2496
X-Lines: 63



In response to the recent queries about these "Ether-like" proposals,
here is my perspective from an IEEE 802 vantage point.

Recently there have been some proposals made to IEEE Project 802.  I
see them as being classified into 3 types.


"IsoEthernet"
-------------

This proposal provides a physical layer which multiplexes a 10Mbps
CSMA/CD channel with a 6 Mbps switched isochronous channel.  As I
understand it, this physical layer serves 2 MACs: one CSMA/CD and one
Isochronous.  The concentrator would contain both 802.3 repeater
functionality and switch functionality (though I guess the Ethernet
activity might also be switched in high performance concentrators).
This proposal is being worked on in 802.9 (IVD LANs) since both
802.3- and 802.9-like MAC functions are involved. It is similar to 802.9 in data
rate and in having isochronous channels.  The proposal was presented
to 802.9 in November.

Full Duplex Ethernet
--------------------

This is a proposal to modify 802.3 so that by using switching hubs and
modified adapters, the full duplex nature of some 802.3 PMD's can be utilized.
10BASE-T, 10BASE-FL and 10BASE-FB all use physical media which can support
simultaneous transmit and receive.  If the MAC is modified to remove CSMA/CD
access rules and the physical layer modified to pass on the full duplex
transmission supported by the media, this can be done.  Some products for
non-standard connecting links between bridges over longer distances than
CSMA/CD allows already do this using microwave or single mode fiber.   This
proposal is more directed at doing it to end nodes to get a performance
improvement by allowing simultaneous transmission and reception.

This proposal is more closely tied to 802.3, so it is being discussed in 802.3,
currently in the higher speed study group.  It is not a CSMA/CD MAC, but
it does use CSMA/CD physical and MAC layers with small modifications to
each.

100 Mbps ProposalS
------------------

Yes, there are three.

Shortly before the November 1992 meeting of IEEE 802, several
proposals for 100 Mbps Workstation LANs came to the committee's
attention.  Two use an 802.3-like CSMA/CD MAC.  One uses a new MAC.
All were aimed at supporting twisted pair.  These are being reviewed
within a study group in 802.3.


This work is in its early stages of standardization.  Input is
requested.  Since I chair 802.1 and sit on the 802 Executive
Committee, I will be happy to act as a conduit and sounding board for
thoughts and ideas.






From rem-conf-request@es.net Wed Mar 10 13:57:00 1993
Date: Wed, 10 Mar 1993 16:45:32 -0600
To: rem-conf@es.net
From: arm@aqua.whoi.edu (Andrew Maffei)
Subject: Real time telemetry on the mbone
Content-Length: 1695
Status: RO
X-Lines: 39

Friends,

For the past three weeks the Woods Hole Oceanographic Institution has 
been running a 56 kbaud IP connection to a LAN on board the research 
vessel Laney Choest off the Baja peninsula.  The Laney is hosting the 
Jason Project this year, and we have had the amazing opportunity to have 
access at little direct cost.  The experiment will only continue through 
Friday, March 12, 1993.  After that, the ship comes back to port and 
the data feed ends.  

This experiment is being used as a "proof of concept", the concept 
being that oceanographic research vessels and other remote scientific 
platforms, as national research resources, should be on the Internet.

The SeaNet project, being coordinated by the Joint Oceanographic 
Institutions Incorporated (JOI), is developing this idea.  We have 
even registered a new Internet domain -- seanet.int -- emphasizing 
the international nature of this endeavor.

Real-time data is being multicast from the ship almost 24 hours a day.  
Visualization software, provided by NASA, SGI, and the Woods Hole 
Oceanographic Institution, is being made available for those of you who 
want to try it out.  The software will only run on SGI or SparcStation 
workstations.

The distribution and a frequently asked questions document (faq.txt) 
can be found on the machine jargon.whoi.edu in the anonymous ftp 
directory pub/mjason_dist.  A README file explains how to get things 
up and running.

If you want to get regular scientific updates of how the project is going
from a scientific vantagepoint send the single line message

SUBSCRIBE JASON FirstName LastName

to listserv@cerf.net

Have fun.  Please read the faq and README before asking questions.


From rem-conf-request@es.net Thu Mar 11 08:34:13 1993
Date: Thu, 11 Mar 1993 16:55:19 +0100
From: Thierry Turletti <Thierry.Turletti@sophia.inria.fr>
To: rem-conf@es.net
Subject: New Internet draft
Content-Length: 29255
Status: RO
X-Lines: 894


Here's a draft of an RFC defining a packetization scheme for H.261
video streams.

If you have any comments, they'd be much appreciated.

Thierry

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~




          Internet draft                     Packetization of H.261


                                  Packetization
                                       of
                               H.261 video streams



                                 Mon Mar 8, 1993

                       Thierry Turletti, Christian Huitema
                                      INRIA

                        Christian.Huitema@sophia.inria.fr
                         Thierry.Turletti@sophia.inria.fr






          1.  Status of this Memo

          This document is an Internet draft.  Internet drafts are
          working documents of the Internet Engineering Task Force
          (IETF), its Areas, and its Working Groups.  Note that other
          groups may also distribute working documents as Internet
          Drafts.

          Internet Drafts are draft documents valid for a maximum of six
          months. Internet Drafts may be updated, replaced, or obsoleted
          by other documents at any time. It is not appropriate to use
          Internet Drafts as reference material or to cite them other
          than as a "working draft" or "work in progress".

          Please check the I-D abstract listing contained in each
          Internet Draft directory to learn the current status of this
          or any other Internet Draft.

          Distribution of this document is unlimited.












          Turletti, Huitema                                     [Page 1]





          Internet draft                     Packetization of H.261


          2.  Purpose of this document

          The CCITT recommendation H.261 [1] specifies the encodings
          used by CCITT-compliant video-conference codecs.  Although
          these encodings were originally specified for fixed data rate
          ISDN circuits, experiments [2] have shown that they can also
          be used over the Internet.

          The purpose of this memo is to specify how H.261 video streams
          can be carried over UDP and IP, using the RTP protocol [3].


          3.  Structure of the packet stream

          H.261 codecs produce a bit stream.  In fact, H.261 and
          companion recommendations specify several levels of
          encoding:

          (1)  Images are first separated into blocks of 8x8 pixels.
               Blocks which have moved are encoded by computing the
               discrete cosine transform (DCT) of their coefficients,
               which are then quantized and Huffman encoded.

          (2)  The bits resulting from the Huffman encoding are then
               arranged in 512-bit frames, containing 2 bits of
               synchronization, 492 bits of data and 18 bits of error
               correcting code.

          (3)  The 512-bit frames are then interlaced with an audio
               stream and transmitted over p x 64 kbps circuits
               according to specification H.261.

          When transmitting over the Internet, we will directly consider
          the output of the Huffman encoding.  We will not carry the
          512-bit frames, as protection against errors can be obtained
          by other means.  Similarly, we will not attempt to multiplex
          audio and video signals in the same packets, as UDP and RTP
          provide a much more efficient way to achieve multiplexing.

          Directly transmitting the result of the Huffman encoding over
          an unreliable stream of UDP datagrams would however have very
          poor error resistance characteristics.  The H.261 coding is in
          fact organized as a sequence of images, or frames, which are
          themselves organized as a set of Groups of Blocks (GOB).  Each
          GOB holds a set of 3 lines of 11 macro blocks (MB).  Each MB





          Turletti, Huitema                                     [Page 2]





          Internet draft                     Packetization of H.261


          carries information on a group of 16x16 pixels: luminance
          information is specified for 4 blocks of 8x8 pixels, while
          chrominance information is only given by two 8x8 "red" and
          "blue" blocks.

          This grouping is used to specify information at each level of
          the hierarchy:

          -    At the frame level, one specifies information such as
               the delay from the previous frame, the image format, and
               various indicators.

          -    At the GOB level, one specifies the GOB number and the
               default quantizer that will be used for the MBs.

          -    At the MB level, one specifies which blocks are present
               and which did not change, and optionally a quantizer, as
               well as details of the coding such as motion vectors.

          The result of this structure is that one needs to receive the
          information present in the frame header to decode the GOBs,
          as well as the information present in the GOB header to
          decode the MBs.  Without precautions, this would mean that one
          has to receive all the packets that carry an image in order to
          properly decode its components.  In fact, experience has
          shown that:

          (1)  It would be unrealistic to carry an image in a single
               packet: video images can sometimes be very large.

          (2)  A GOB would most often be correctly sized to fit in a
               packet.  In fact, several GOBs can often be grouped in
               one packet.

          Once we have taken the decision to correlate GOB
          synchronization and packetization, a number of decisions
          remain to be taken, due to the following conditions:

          (1)  The algorithm should be easy to implement when
               packetizing the output stream of a hardware codec.

          (2)  The algorithm should not induce rendition delays -- we
               should not have to wait for a following packet to display
               an image.





          Turletti, Huitema                                     [Page 3]





          Internet draft                     Packetization of H.261


          (3)  The algorithm should allow for efficient
               resynchronization in case of packet losses.

          (4)  It should be easy to depacketize the data stream and
               direct it to a hardware codec's input.

          (5)  When the hardware decoder operates at a fixed bit rate,
               one should be able to maintain synchronization, e.g. by
               adding padding bits when the packet arrival rate is
               slower than the bit rate.

          The H.261 Huffman encoding includes a special "GOB start"
          pattern, composed of 15 zeroes followed by a single 1, that
          cannot be imitated by any other code words.  That pattern
          marks the separation between two GOBs, and is in fact used as
          an indicator that the current GOB is terminated.  The encoding
          also includes a stuffing pattern, composed of seven zeroes
          followed by four ones; that stuffing pattern can only be
          inserted between the encoding of MBs, or just before the GOB
          separator.
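          As an illustration of the pattern just described, the sketch
          below (our own, not part of the specification; the helper
          name is hypothetical) scans a byte buffer MSB-first for the
          GOB start code of 15 zeroes followed by a 1:

```c
#include <assert.h>
#include <stddef.h>

/* Return the bit offset (MSB-first) of the first GOB start code
 * (fifteen 0 bits followed by a 1 bit) in buf, or -1 if none is
 * found.  Illustrative helper, not part of the draft. */
long find_gob_start(const unsigned char *buf, size_t len)
{
    int zeros = 0;                       /* length of current zero run */
    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            int bit = (buf[i] >> b) & 1;
            if (bit == 0) {
                zeros++;
            } else {
                if (zeros >= 15)
                    /* the 1 ends the code; the code starts 15 bits back */
                    return (long)(i * 8 + (7 - b)) - 15;
                zeros = 0;               /* run broken, start over */
            }
        }
    }
    return -1;
}
```

          A run of zeroes longer than 15 still ends in a valid start
          code, which is why the test is `zeros >= 15` rather than an
          equality.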

          The first conclusion of the analysis is that the packets
          should contain all the GOB data, including the "GOB start"
          pattern that separates the current block from its follower.
          In fact, as this pattern is well known, we could as well use a
          single bit in the data header to indicate its presence.

          Not encoding the GOB-start pattern has two advantages:

          (1)  It reduces the number of bits in the packets, and avoids
               the possibility of splitting packets in the middle of a
               GOB separator.

          (2)  It allows gateways to hardware decoders to insert the
               stuffing pattern in front of the GOB, in order to meet
               the fixed bit rate requirement.

          Another problem posed by the specifics of the H.261
          compression is that the GOB data have no particular reason to
          fit in an integer number of octets.  The data header will thus
          contain two three-bit integers, SBIT and EBIT:

          SBIT indicates the number of bits that should be ignored in
               the first (start) data octet.






          Turletti, Huitema                                     [Page 4]





          Internet draft                     Packetization of H.261


          EBIT indicates the number of bits that should be ignored in
               the last (end) data octet.

          Although only the EBIT counter would really be needed for
          software coders, the SBIT counter was inserted to ease the
          packetization of hardware coders' output.  A sample
          packetization procedure is given in Appendix A.
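          To make the bit accounting concrete, here is a small
          receiver-side sketch (our own illustration, not part of the
          draft; the helper names are hypothetical): given SBIT and
          EBIT from the data header, it locates the useful bits of a
          payload.

```c
#include <assert.h>
#include <stddef.h>

/* Number of useful H.261 bits in a payload of 'len' octets, once
 * the SBIT and EBIT padding bits are discarded. */
long h261_valid_bits(size_t len, int sbit, int ebit)
{
    return (long)len * 8 - sbit - ebit;
}

/* The i-th useful bit (0-based, MSB-first), skipping the SBIT
 * padding bits of the first (start) octet. */
int h261_bit(const unsigned char *data, int sbit, long i)
{
    long j = i + sbit;                  /* absolute bit position */
    return (data[j / 8] >> (7 - j % 8)) & 1;
}
```

          A depacketizer built this way never needs byte alignment:
          the payload is treated purely as a bit span.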

          At the receiving sites, the GOB synchronization can be used in
          conjunction with the synchronization service of the RTP
          protocol.  In case of losses, the decoders could become
          desynchronized.  The "S" bit of the RTP header will be set to
          indicate that the packet includes the beginning of the
          encoding of a GOB, i.e. the quantizer common to all macro
          blocks.  The receiver will detect losses by looking at the RTP
          sequence numbers.  In case of losses, it will ignore all
          packets whose "S" bit is null.  Once an S bit packet has been
          received, it will prepend the GOB start code to that packet,
          and resume decoding.
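          The loss-recovery rule above can be sketched as a small state
          machine (illustrative only; the type and function names are
          ours, not the draft's):

```c
#include <assert.h>

/* Receiver state: the next expected RTP sequence number, and
 * whether the decoder is currently synchronized. */
struct rx_state { unsigned next_seq; int synced; };

/* Decide whether a packet should be decoded (1) or dropped (0).
 * After a sequence-number gap, packets are dropped until one
 * arrives with the S (start-of-GOB) bit set. */
int rx_accept(struct rx_state *st, unsigned seq, int s_bit)
{
    if (seq != st->next_seq)          /* gap: a packet was lost */
        st->synced = 0;
    st->next_seq = (seq + 1) & 0xffff;
    if (!st->synced && s_bit)         /* resync on a start-of-GOB packet */
        st->synced = 1;
    return st->synced;
}
```

          When `rx_accept` returns 1 after a loss, the caller would
          prepend the GOB start code before handing the payload to the
          decoder, as described above.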

          An example packetization program is given in Appendix A.





























          Turletti, Huitema                                     [Page 5]





          Internet draft                     Packetization of H.261


          4.  Usage of RTP

          The H.261 information is carried as data within the RTP
          protocol, using the following fields:

                  _____________________________________________
                 | Ver       |   Protocol version (1).        |
                 |___________|________________________________|
                 | Flow      |   Identifies one particular    |
                 |           |   video stream.                |
                 |___________|________________________________|
                 | Content   |   H.261 encoded video (31).    |
                 |___________|________________________________|
                 | Sequence  |   Identifies the packet within |
                 | number    |   a stream                     |
                 |___________|________________________________|
                 | Sync      |   Set if the packet is         |
                 |           |   synchronized on an image or  |
                 |           |   on a group of blocks.        |
                 |___________|________________________________|
                 | Timestamp |   The date at which the        |
                 |           |   image was grabbed.           |
                 |___________|________________________________|


          The very definition of these settings implies that the
          beginning of an image shall always be synchronized with a
          packet.  The RTP sequence number can be used to detect missing
          packets.  In this case, one shall ignore all incoming packets
          until the next synchronization mark is received.  The H.261
          data will follow the RTP options, as in:

            0                   1                   2                   3
            0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |Ver| flow      |F|S|  content  | sequence number               |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | timestamp (seconds)           | timestamp (fraction)          |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           .                                                               .
           .                    RTP options (optional)                     .
           .                                                               .
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |        H.261  options         |         H.261 stream...       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
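          Assuming the fixed first 32-bit word laid out above (Ver: 2
          bits, flow: 6, F: 1, S: 1, content: 6, sequence number: 16),
          a receiver could unpack it with shifts and masks.  This is a
          sketch of ours, not normative code; 'w' is assumed to already
          be in host byte order (after ntohl()):

```c
#include <assert.h>
#include <stdint.h>

/* Fields of the first RTP header word, per the diagram above. */
struct rtp_hdr { unsigned ver, flow, f, s, content, seq; };

struct rtp_hdr rtp_parse_word(uint32_t w)
{
    struct rtp_hdr h;
    h.ver     = (w >> 30) & 0x3;    /* protocol version */
    h.flow    = (w >> 24) & 0x3f;   /* video stream id */
    h.f       = (w >> 23) & 0x1;
    h.s       = (w >> 22) & 0x1;    /* sync bit */
    h.content = (w >> 16) & 0x3f;   /* 31 = H.261 encoded video */
    h.seq     =  w        & 0xffff; /* sequence number */
    return h;
}
```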





          Turletti, Huitema                                     [Page 6]





          Internet draft                     Packetization of H.261


          The H.261 options field is defined as follows:

            0                   1
            0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |S|SBIT |E|EBIT |C|I|V|0|  FMT  |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

             _______________________________________________________
            | S (1 bit)     |   Start of GOB. Set if               |
            |               |   the packet is a start of GOB.      |
            |_______________|______________________________________|
            | SBIT (3 bits) |   Start bit position                 |
            |               |   number of bits that should         |
            |               |   be ignored in the first            |
            |               |   (start) data octet.                |
            |_______________|______________________________________|
            | E (1 bit)     |   End of GOB. Set if                 |
            |               |   the packet is an end of GOB.       |
            |_______________|______________________________________|
            | EBIT (3 bits) |   End bit position                   |
            |               |   number of bits that should         |
            |               |   be ignored in the last             |
            |               |   (end) data octet.                  |
            |_______________|______________________________________|
            | C (1 bit)     |   Color flag. Set if                 |
            |               |   color is encoded.                  |
            |_______________|______________________________________|
            | I (1 bit)     |   Full Intra Image flag.             |
            |               |   Set if it is the first packet      |
            |               |   of a full intra image.             |
            |_______________|______________________________________|
            | V (1 bit)     |   movement Vector flag.              |
            |               |   Set if movement vectors            |
            |               |   are encoded.                       |
            |_______________|______________________________________|
            | FMT (4 bits)  |   Image format:                      |
            |               |   QCIF, CIF or number of CIF in SCIF.|
            |_______________|______________________________________|
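          The 16-bit options field can be unpacked with simple shifts
          and masks, following the bit layout of the diagram above.
          The sketch below is our own illustration, not part of the
          specification:

```c
#include <assert.h>
#include <stdint.h>

/* Fields of the H.261 options word: |S|SBIT|E|EBIT|C|I|V|0|FMT|. */
struct h261_opts { unsigned s, sbit, e, ebit, c, i, v, fmt; };

struct h261_opts h261_parse_opts(uint16_t w)
{
    struct h261_opts o;
    o.s    = (w >> 15) & 0x1;   /* start of GOB */
    o.sbit = (w >> 12) & 0x7;   /* start bit position */
    o.e    = (w >> 11) & 0x1;   /* end of GOB */
    o.ebit = (w >> 8)  & 0x7;   /* end bit position */
    o.c    = (w >> 7)  & 0x1;   /* color flag */
    o.i    = (w >> 6)  & 0x1;   /* full intra image flag */
    o.v    = (w >> 5)  & 0x1;   /* movement vector flag */
    o.fmt  =  w        & 0xf;   /* image format (QCIF/CIF/SCIF) */
    return o;
}
```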


          The image format (4 bits) is defined as follows:








          Turletti, Huitema                                     [Page 7]





          Internet draft                     Packetization of H.261


                          _____________________________
                         | QCIF               |   0000|
                         |____________________|_______|
                         | CIF                |   0001|
                         |____________________|_______|
                         | SCIF 0             |       |
                         | upper left corner  |   0100|
                         | CIF in SCIF image  |       |
                         |____________________|_______|
                         | SCIF 1             |       |
                         | upper right corner |   0101|
                         | CIF in SCIF image  |       |
                         |____________________|_______|
                         | SCIF 2             |       |
                         | lower left corner  |   0110|
                         | CIF in SCIF image  |       |
                         |____________________|_______|
                         | SCIF 3             |       |
                         | lower right corner |   0111|
                         | CIF in SCIF image  |       |
                         |____________________|_______|





























          Turletti, Huitema                                     [Page 8]





          Internet draft                     Packetization of H.261


          5.  Usage of RTP parameters

          When sending or receiving H.261 streams through the RTP
          protocol, the stations should be ready to:

          (1)  process or ignore all generic RTP parameters,

          (2)  send or receive H.261 specific "Reverse Application Data"
               parameters, to request a video resynchronization.

          This memo describes two "RAD" item types, "Full Intra Request"
          and "Negative Acknowledge".

          5.1.  Controlling the reverse flow

          Support of the reverse application data by the H.261 sender is
          optional; in particular, early experiments have shown that the
          usage of this feature could have very negative effects when
          the number of recipients is very large.

          Recipients learn the return address where RAD information may
          be sent from the Content description (CDESC) item, which may
          be included as an RTP option in any of the video packets. The
          CDESC item includes a Return port number value. A value of
          zero indicates that no reverse control information should be
          returned.

          A recipient shall never send a RAD item if it has not yet
          received a CDESC item from the source, or if the port number
          received in the last CDESC item was null.

          Emitters should identify themselves by sending CDESC items at
          regular intervals.

          5.2.  Full Intra Request

          The "Full Intra Request" items are identified by the item Type
          "FIR" (0).

            0                   1                   2                   3
            0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |F|    RAD      |  length = 1   |   Type        | Z |   Flow    |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+






          Turletti, Huitema                                     [Page 9]





          Internet draft                     Packetization of H.261


          These packets indicate that a recipient has lost all video
          synchronization, and request the emitter to send the next
          image in "Intra" coding mode, i.e.  without using differential
          coding. The various fields are defined as follows:

                 ________________________________________________
                | F      |   Last option bit, as defined by RTP.|
                |________|______________________________________|
                | RAD    |   RAD option type (65)               |
                |________|______________________________________|
                | Length |   One 32-bit word.                   |
                |________|______________________________________|
                | Type   |   FIR (0).                           |
                |________|______________________________________|
                |  Z     |   Must be zero                       |
                |________|______________________________________|
                | Flow   |   The flow id of the incoming packets|
                |________|______________________________________|
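          A recipient could build the FIR item as a single 32-bit word,
          packed per the diagram above and converted with htonl()
          before sending.  This is an illustrative sketch of ours; the
          constant 65 is the RAD option type from the table:

```c
#include <assert.h>
#include <stdint.h>

/* Pack a Full Intra Request item: F:1, option:7, length:8, type:8,
 * Z:2 (zero), flow:6.  Illustrative helper, not part of the draft. */
uint32_t rad_fir(unsigned f, unsigned flow)
{
    return ((uint32_t)(f & 1) << 31)
         | ((uint32_t)65 << 24)   /* RAD option type */
         | ((uint32_t)1  << 16)   /* length = one 32-bit word */
         | ((uint32_t)0  << 8)    /* type = FIR (0) */
         | (flow & 0x3f);         /* flow id of the incoming packets */
}
```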


          5.3.  Negative Acknowledge

          Packet losses are detected using the RTP sequence number.
          After a packet loss, the receiver will resynchronize on the
          next GOB. However, as H.261 uses differential encoding, parts
          of the images may remain blurred for a rather long time.

          As all GOBs belonging to a given video image carry the same
          time stamp, the receiver can determine a list of GOBs which
          were effectively received for that time stamp, and thus
          identify the "missing blocks". Requesting a specific
          reinitialization of these missing blocks is more efficient
          than requesting a complete reinitialization of the image
          through the "Full Intra Request" item.
















          Turletti, Huitema                                    [Page 10]





          Internet draft                     Packetization of H.261


          The format of the video-nack option is as follows:

            0                   1                   2                   3
            0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |F|    RAD      |  length = 3   |   Type        | Z |   Flow    |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           |     FGOBL     |     LGOBL     |    MBZ                |  FMT  |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | timestamp (seconds)           | timestamp (fraction)          |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

          The different fields have the following values:

             ________________________________________________________
            | F             |   Last option bit, as defined by RTP. |
            |_______________|_______________________________________|
            | RAD           |   RAD option type (65)                |
            |_______________|_______________________________________|
            | Length        |   Three 32-bit words.                 |
            |_______________|_______________________________________|
            | Type          |   NACK (1).                           |
            |_______________|_______________________________________|
            | Z             |   Must be zero                        |
            |_______________|_______________________________________|
            | Flow          |   The flow id of the incoming packets |
            |_______________|_______________________________________|
            | FGOBL         |   First GOB Lost:                     |
            |               |  Identifies the first GOB lost number.|
            |_______________|_______________________________________|
            | LGOBL         |   Last GOB Lost:                      |
            |               |   Identifies the last GOB lost number.|
            |_______________|_______________________________________|
            | MBZ           |   Must be zero                        |
            |_______________|_______________________________________|
            | FMT           |   Repeat the format indicator of the  |
            |               |   received image, including the number|
            |               |   of the SCIF subimage if present.    |
            |_______________|_______________________________________|
            | Timestamp     |   The RTP timestamp of the            |
            | original image|                                       |
            |_______________|_______________________________________|








          Turletti, Huitema                                    [Page 11]





          Internet draft                     Packetization of H.261


          6.  References

          [1]  CCITT Recommendation H.261, "Video codec for audiovisual
               services at p x 64 kbit/s."

          [2]  Thierry Turletti, "H.261 software codec for
               videoconferencing over the Internet," INRIA Research
               Report no. 1834.

          [3]  Henning Schulzrinne, "A Transport Protocol for Real-Time
               Applications," Internet Draft, December 15, 1992.







































          Turletti, Huitema                                    [Page 12]





          Internet draft                     Packetization of H.261


          Appendix A

          The following code can be used to packetize the output of an
          H.261 codec:

          #include <stdio.h>

          #define BUFFER_MAX 512

          /* right[c] = number of leading zero bits in byte c */
          int right[] = {
             8,7,6,6,5,5,5,5,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
             2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
             1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
             1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
             0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
             0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
             0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
             0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0};

          /* left[c] = number of trailing zero bits in byte c */
          int left[] = {
             8,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             7,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             6,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,
             5,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0,4,0,1,0,2,0,1,0,3,0,1,0,2,0,1,0};

          h261_sync(F)
             FILE *F;
          {
             int  i, ebit, sbit, start_of_group, end_of_group,
                   c, nz;
             unsigned char buf[BUFFER_MAX];

             i = 0;
             ebit = 0;
             sbit = 0;
             start_of_group = 1;
             nz = 0;
             /* Zero bytes are valid data; read until end of file. */
             while ((c = getc(F)) != EOF) {
                buf[i++] = c;
                if (c == 0) {





          Turletti, Huitema                                    [Page 13]





          Internet draft                     Packetization of H.261


                   nz += 8;
                } else {
                   /* Leading zeros of c extend the current zero run. */
                   nz += right[c];
                   end_of_group = 1;
                   if (nz >= 15) {
                      /* GOB start code (15 zeros and a 1) found. */
                      if (right[c] == 7) {
                         ebit = 0;
                         send_message(buf, i - 2, sbit, ebit,
                            end_of_group, start_of_group);
                         sbit = 0;
                         i = 0;
                      } else {
                         ebit = 7 - right[c];
                         send_message(buf, i - 2, sbit, ebit,
                            end_of_group, start_of_group);
                         i = 0;
                         buf[i++] = c;
                         sbit = right[c] + 1;
                      }
                      start_of_group = 1;
                      nz = left[c];   /* restart the zero run count */
                   } else {
                      /* Trailing zeros of c start a new zero run. */
                      nz = left[c];
                      if (i >= BUFFER_MAX) {
                         ebit = 0;
                         end_of_group = 0;
                         send_message(buf, i - 2, sbit, ebit,
                            end_of_group, start_of_group);
                         buf[0] = buf[i - 2];
                         buf[1] = buf[i - 1];
                         i = 2;
                         sbit = 0;
                         start_of_group = 0;
                      }
                   }
                }
             }
          }













          Turletti, Huitema                                    [Page 14]





          Internet draft                     Packetization of H.261


          Table of Contents


          1 Status of this Memo ...................................    1
          2 Purpose of this document ..............................    2
          3 Structure of the packet stream ........................    2
          4 Usage of RTP ..........................................    6
          5 Usage of RTP parameters ...............................    9
          5.1 Controlling the reverse flow ........................    9
          5.2 Full Intra Request ..................................    9
          5.3 Negative Acknowledge ................................   10
          6 References ............................................   12
           Appendix A .............................................   13



From rem-conf-request@es.net Thu Mar 11 16:04:14 1993
To: rem-conf@osi-west.es.net
Subject: Multicast Tunnel between sneezy.lanl.gov and vet.ee.lbl.gov
Date: Thu, 11 Mar 93 16:49:50 -0700
From: Philip Wood <cpw@sneezy.lanl.gov>
Content-Length: 170
Status: RO
X-Lines: 9

Van,

I'm jumping ship to peer with 128.55.128.181.  Please feel free
to remove the entry which specifies a tunnel between 128.3.112.48
and 128.165.114.1.

Thanks,

Phil

From rem-conf-request@es.net Thu Mar 11 19:17:46 1993
Date: Thu, 11 Mar 93 18:45:57 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net, lidinsky@hep.net
Subject: Re: "ether-like" Proposals
Status: RO
Content-Length: 3737
X-Lines: 89

Bill Lidinsky writes:
> 
> In response to the recent queries about these "Ether-like" proposals,
> here is my perspective from an IEEE 802 vantage point.
> 
> Recently there have been some proposals made to IEEE Project 802.  I
> see them as being classified into 3 types.
> 
> 
> "IsoEthernet"
   [deleted text]

> Full Duplex Ethernet
   [deleted text]

> 100 Mbps Proposals
   [deleted text]

> This work is in its early stages of standardization.  Input is
> requested.  Since I chair 802.1 and sit on the 802 Executive
> Committee, I will be happy to act as a conduit and sounding board for
> thoughts and ideas.
> 

I know I'm going to regret this ... Especially since it's too early for 
April 1, and the person being quoted seems sincere...

Where does the University of Missouri's "Local Multimedia Network" (LMN)
fit into the scheme of things? As reported in NETWORK WORLD, March 1, 1993
(pp 13, 23):

"  University develops new multimedia LAN scheme 

Kansas City, Mo. -- Users looking to implement real-time video and multi-
media applications over LANs may get some help in the near future.

The Center for Telecomputing Research (CTR) at the University of Missouri 
here has developed a local-area network, the Local Multimedia Network (LMN),
that works over a traditional Ethernet bus topology but ensures bandwidth
on demand and real-time data delivery by obviating the need for Ethernet's 
Carrier Sense Multiple Access with Collision Detection signaling scheme.

LMN, which runs at speeds up to 50M bit/sec over traditional copper wiring
and up to 150M bit/sec over fiber, uses a 53-byte packet, enabling it to
integrate with future Asynchronous Transfer Mode network implementations,
according to Upkar Varshney, a research associate at CTR.

"Using [LMN], there will be no need for translation or bridging, so there 
will be no delay going into the wide area," he said.

In the LMN scheme, one LAN station acts as a system monitor, controlling 
access to the LAN. The monitor sends control packets that track LAN 
utilization across the net every 6 msec.

When a workstation needs to send a video clip, for example, workstation 
software determines the needed bandwidth and requests it from the system 
monitor, which approves or denies the request based on availability.

Administrators can determine how long a workstation must wait to have a 
request approved, usually about two seconds. If the request is still
denied, they can set up an appropriate interval to wait before trying to 
resend the data.

According to Varshney, LMN improves on other schemes, such as certain
100M bit/sec Ethernet proposals, because video quality does not degrade 
as more users are added to the network and collisions increase. It also 
supports distances of 5 to 6 km end to end, whereas some 100M bit/sec 
Ethernet proposals limit the LAN to 250 meters in diameter.

LMN improves over technologies that add an isochronous channel to packet-
based LANs because it enables users to take advantage of all available 
bandwidth for any information type -- voice, video, or data.

Varshney said CTR is seeking vendor support for developing LMN adapters, 
which could be available as early as next year. He declined to name the 
vendors.  "

------

I hope there's something with more technical content available which 
describes LMN!


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Fri Mar 12 12:47:14 1993
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: Sample RTP header file
Date: Fri, 12 Mar 1993 12:35:34 PST
Content-Length: 4937
Status: RO
X-Lines: 148

Hello everyone...

I have been working on making my packet video program 'nv' use the
evolving RTP transport protocol, and I put together a header file which is
based on the latest version of the RTP Internet Draft I found at UMass.
Since I haven't seen anything like this online yet, I thought it made sense
to send it out to the list... Corrections, additions, or other comments are
welcome. It would be nice if we could come up with a single header file
that everyone could use when writing RTP applications.

Note that as I have written it, the header file probably won't work right
on little-endian machines. That'll probably need to be dealt with in the
final version...

--- cut here ---
/*
 * rtp.h
 *
 * Constants and structures based on the 12/15/92 draft of the RTP protocol
 * Internet Draft. This information is still subject to change.
 *
 */

#ifndef _rtp_h
#define _rtp_h

/* Basic RTP header */
struct rtphdr {
	u_char	rh_vers:2;	/* version */
	u_char	rh_flow:6;	/* flow id */
	u_char	rh_opts:1;	/* options present */
	u_char	rh_sync:1;	/* end of synchronization unit */
	u_char	rh_content:6;	/* content id */
	u_short	rh_seq;		/* sequence number */
	u_long	rh_ts;		/* time stamp (middle of NTP timestamp) */
};

/* Basic RTP option header */
struct rtpopthdr {
	u_char	roh_fin:1;	/* final option flag */
	u_char	roh_type:7;	/* option type */
	u_char	roh_optlen;	/* option len */
};

/* Normal RTP options */
#define RTPOPT_CSRC	0	/* Content source */
#define RTPOPT_SSRC	1	/* Synchronization source */
#define RTPOPT_BOP	2	/* Beginning of playout unit */

/* RTP source (CSRC, SSRC) option header */
struct rtpsrchdr {
	u_char	rsh_fin:1;	/* final option flag */
	u_char	rsh_type:7;	/* option type */
	u_char	rsh_optlen;	/* option len (== 2) */
	u_short	rsh_uid;	/* unique id within host */
	u_long	rsh_addr;	/* IP address of host */
};

/* RTP BOP option header */
struct rtpbophdr {
	u_char	rbh_fin:1;	/* final option flag */
	u_char	rbh_type:7;	/* option type */
	u_char	rbh_optlen;	/* option len (== 1) */
	u_short	rbh_seq;	/* sequence number of BOP */
};

/* RTCP forward direction options */
#define RTPOPT_CDESC	32	/* Content description */
#define RTPOPT_SDESC	33	/* Source description */
#define RTPOPT_FDESC	34	/* Flow description */
#define RTPOPT_BYE	35	/* Conference exit notification */

/* RTCP CDESC option header */
struct rtcpcdeschdr {
	u_char	rtch_fin:1;	/* final option flag */
	u_char	rtch_type:7;	/* option type */
	u_char	rtch_optlen;	/* option len */
	u_char	rtch_x1:2;	/* reserved (must be 0) */
	u_char	rtch_content:6;	/* content id */
	u_char	rtch_x2;	/* reserved (must be 0) */
	u_short	rtch_rport;	/* return port */
	u_short	rtch_cqual;	/* clock quality */
	u_short	rtch_x3;	/* reserved (must be 0) */
	u_long	rtch_cdesc;	/* content descriptor */
};

/* RTCP SDESC option header */
struct rtcpsdeschdr {
	u_char	rtsh_fin:1;	/* final option flag */
	u_char	rtsh_type:7;	/* option type */
	u_char	rtsh_optlen;	/* option len */
	u_short	rtsh_uid;	/* unique id within host */
	u_long	rtsh_addr;	/* IP address of host */
};

/* RTCP BYE option header */
struct rtcpbyehdr {
	u_char	rtbh_fin:1;	/* final option flag */
	u_char	rtbh_type:7;	/* option type */
	u_char	rtbh_optlen;	/* option len */
	u_short	rtbh_uid;	/* unique id within host */
	u_long	rtbh_addr;	/* IP address of host */
};

/* RTCP reverse direction options */
#define RTPOPT_QOS	64	/* Quality of service */
#define RTPOPT_RAD	65	/* Raw application data */

/* Basic RTCP reverse packet header */
struct rtcprevhdr {
	u_char	rtrh_flow;	/* flow id */
	u_char	rtrh_x1;	/* reserved (must be 0) */
	u_char	rtrh_x2;	/* reserved (must be 0) */
	u_char	rtrh_x3;	/* reserved (must be 0) */
};

/* RTCP QOS option header */
struct rtcpqoshdr {
	u_char	rtqh_fin:1;	/* final option flag */
	u_char	rtqh_type:7;	/* option type */
	u_char	rtqh_optlen;	/* option len (== 5) */
	u_short	rtqh_uid;	/* unique id within host */
	u_long	rtqh_addr;	/* IP address of host */
	u_short	rtqh_precv;	/* packets received */
	u_short	rtqh_seqrange;	/* sequence number range */
	u_short	rtqh_mindel;	/* minimum delay */
	u_short	rtqh_maxdel;	/* maximum delay */
	u_short	rtqh_avgdel;	/* average delay */
	u_short	rtqh_x;		/* reserved (must be 0) */
};

/* RTP standard content encodings for audio */
#define RTPCONT_PCMU		0	/* 8kHz PCM mu-law mono */
#define RTPCONT_1016		1	/* 8kHz CELP (Fed Std 1016) mono */
#define RTPCONT_G721		2	/* 8kHz G.721 ADPCM mono */
#define RTPCONT_GSM		3	/* 8kHz GSM mono */
#define RTPCONT_G723		4	/* 8kHz G.723 ADPCM mono */
#define RTPCONT_DVI		5	/* 8kHz Intel DVI ADPCM mono */
#define RTPCONT_L16_16		6	/* 16kHz 16-bit linear mono */
#define RTPCONT_L16_44_2	7	/* 44.1kHz 16-bit linear stereo */

/* RTP standard content encodings for video */
#define RTPCONT_NV		28	/* Xerox PARC nv */
#define RTPCONT_DVC		29	/* BBN dvc */
#define RTPCONT_BOLT		30	/* Bolter */
#define RTPCONT_H261		31	/* CCITT H.261 */

#endif /* _rtp_h */

From rem-conf-request@es.net Mon Mar 15 07:22:56 1993
Mime-Version: 1.0
Content-Type: Multipart/Mixed; Boundary="NextPart"
To: IETF-Announce:;@es.net
Cc: rem-conf@es.net
From: Internet-Drafts@CNRI.Reston.VA.US
Reply-To: Internet-Drafts@CNRI.Reston.VA.US
Subject: ID ACTION:draft-ietf-avt-video-packet-00.txt
Date: Mon, 15 Mar 93 10:05:24 -0500
Sender: cclark@CNRI.Reston.VA.US
Content-Length: 2827
Status: RO
X-Lines: 77

--NextPart

Note:  This announcement reflects a new pathname for the document 
       announced earlier as draft-turletti-video-packet-00.txt

A New Internet Draft is available from the on-line Internet-Drafts 
directories. This draft is a work item of the Audio/Video Transport 
Working Group of the IETF.                                            

       Title     : Packetization of H.261 video streams               
       Author(s) : T. Turletti, C. Huitema
       Filename  : draft-ietf-avt-video-packet-00.txt
       Pages     : 15

The CCITT recommendation H.261 specifies the encodings used by CCITT 
compliant video-conference codecs. Although these encodings were 
originally specified for fixed data rate ISDN circuits, 
experimentation has shown that they can also be used over the 
Internet.                  

The purpose of this memo is to specify how H.261 video streams 
can be carried over UDP and IP, using the RTP protocol.                                                             

Internet-Drafts are available by anonymous FTP.  Login with the	
username "anonymous" and password "guest".  After logging in,
Type "cd internet-drafts".
     "get draft-ietf-avt-video-packet-00.txt".
 
Internet-Drafts directories are located at:	
	                                                
     o  East Coast (US)                          
        Address:  nnsc.nsf.net (128.89.1.178)	
	                                                
     o  West Coast (US)                          
        Address:  ftp.nisc.sri.com (192.33.33.22)
							
     o  Pacific Rim                              
        Address:  munnari.oz.au (128.250.1.21)	
	                                                
     o  Europe                                   
        Address:  nic.nordu.net (192.36.148.17)	
	                                                
Internet-Drafts are also available by mail.	
	                                                
Send a message to:  mail-server@nisc.sri.com. In the body type: 
     "SEND draft-ietf-avt-video-packet-00.txt".
							
For questions, please mail to internet-drafts@cnri.reston.va.us.
							

Below is the data which will enable a MIME compliant Mail Reader 
implementation to automatically retrieve the ASCII version
of the Internet Draft.

--NextPart
Content-Type: Multipart/Alternative; Boundary="OtherAccess"

--OtherAccess
Content-Type:  Message/External-body;
        access-type="mail-server";
        server="mail-server@nisc.sri.com"

Content-Type: text/plain

SEND draft-ietf-avt-video-packet-00.txt

--OtherAccess
Content-Type:   Message/External-body;
        name="draft-ietf-avt-video-packet-00.txt";
        site="nnsc.nsf.net";
        access-type="anon-ftp";
        directory="internet-drafts"

Content-Type: text/plain

--OtherAccess--
--NextPart--

From rem-conf-request@es.net Mon Mar 15 07:23:06 1993
To: rem-conf@es.net
Subject: DEC-5000 versions of vat, sd & nv available for anonymous ftp
Date: Mon, 15 Mar 93 06:37:03 PST
From: Van Jacobson <van@ee.lbl.gov>
Content-Length: 1787
Status: RO
X-Lines: 42

Alpha-quality versions of vat, sd, & nv for DEC-5000 series
workstations are available for anonymous ftp from ftp.ee.lbl.gov
as files dec-vat.tar.Z, dec-sd.tar.Z and dec-nv.tar.Z.  We would
like to have these things working reasonably well by the IETF meeting,
which is why we would be grateful to people who could try them now
and let us know of problems (via email to vat@ee.lbl.gov).  Thanks.

Attached is the distribution README.

 - Van Jacobson & Steve McCanne

ps- We are working on SGI versions of sd & vat.  With luck
    we'll have them out for ftp by the end of the week.

 -------------

Mon Mar 15 06:04:51 PST 1993

These are very early versions of ports of vat (the LBL audio
tool), sd (the LBL session directory) and nv (the PARC network
video tool) to the DEC 3max (5000 series) workstation.  So far
as we know these things work but they have had very little
testing (many thanks to George Michaelson <G.Michaelson@cc.uq.oz.au>
for the testing they have had).  Please let us know of any
problems, questions, suggestions, etc., via mail to
vat@ee.lbl.gov.

Sd & nv need no additional pieces.  Vat requires the DEC CRL AudioFile
audio server (available via anonymous ftp from crl.dec.com in
pub/DEC/AF).  While any of these tools will work using point-to-
point IP unicast, to get much value out of them you really should
put IP multicast into your kernel (multicast for Ultrix is available
for anonymous ftp from gregorio.stanford.edu in directory vmtp-ip).

Vat should work with any audio device supported by AudioFile.  Nv
will work receive-only on any 5000 or 3000 series and can also
send video from workstations equipped with the TX/PIP frame grabber
(the video input device that DECSpin uses).

Remember, these are alpha versions.  Good luck.

  - Van Jacobson & Steve McCanne.

From rem-conf-request@es.net Mon Mar 15 09:34:29 1993
Date: Mon, 15 Mar 93 11:22:49 EST
From: hgs@research.att.com (Henning G. Schulzrinne)
To: srv-location@apple.com, rem-conf@es.net, scott@ftp.com
Subject: draft on resource location
Content-Length: 944
Status: RO
X-Lines: 22

A few quick comments on the I-D 'Resource Location Protocol' just announced:
- it would be nice to add conferencing gateways as a resource_type, beyond
  printer, modem, etc. They exist right now and would be a good candidate
  for resource location.

- Examples for legal attributes (section 4.1) would be helpful.
- authentication is very incomplete; public-key versions are missing
- not clear where the resource database cookie appears (5.1.2)
- the PDU data portion (5.2) and algorithmic issues (5.2.1.3 ff) don't seem
  to belong together.
- the values for address type are unspecified

Also, since this isn't exactly the first resource locator, a description
of where it improves upon its predecessors would be useful. Otherwise,
people may start calling it 'YARL'..
---
Henning Schulzrinne (hgs@research.att.com)
AT&T Bell Laboratories  (MH 2A-244)
600 Mountain Ave; Murray Hill, NJ 07974
phone: +1 908 582-2262; fax: +1 908 582-5809



From rem-conf-request@es.net Tue Mar 16 10:35:44 1993
To: rem-conf@es.net
Subject: spatial audio software
Date: Tue, 16 Mar 93 18:03:09 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Content-Length: 3193
Status: RO
X-Lines: 160


i've got a trivial program for ftp which allows you to set the volume
on a number of workstations in proportion to their
spatial/psycho-acoustic position from a "virtual listener"
and plays a given audio file or files accordingly....

for ftp from
cs.ucl.ac.uk:darpa/sax.tar.Z
(compressed tar file...)
depends on sun audio demo s/w library....and multicast...


man page appended

comments to /dev/null:-)

you could use this in a language class to suppress annoying speakers if
you changed the code to set input volume rather than output gain.
i.e. for dictatorial floor control

also depends on global knowledge of the geography of your servers - could
be fitted with GPS receivers and a cartesian location service, but i
don't have time or money

have fun...


---------------cut here-------------



SAX(L)			 LOCAL COMMANDS			   SAX(L)



NAME
     sax - Spatial Audio eXciter

SYNOPSIS
     saxs audiophile [configphile [multicast address]]

     saxc [configphile [multicast address]]

DESCRIPTION
     sax[cs] is a client-server system that allows a single user
     to play back a number of audio files on a number of
     workstations and to control the volume via a simple X
     window interface.

     Running saxc reads in the file hosts, and draws an X window
     (transparent) in which you can click the mouse buttons, and
     move the mouse. The mouse position corresponds to a place in
     space where a listener "is".  Clicking on the mouse buttons
     informs all the servers to start playing. Moving it informs
     them all of the x,y coordinate of the "listener" (by multi-
     cast) and thus of the volume they should set.

     Typical usage:

     On	a number of machines named in the file "hosts"

     saxs music.au, where music.au is an audiophile.

     On	one of them:

     saxc

     If you want the clients and servers to use different confi-
     guration files (see FILES), then specify them in the config-
     phile argument.  If you also want to specify a non-default
     multicast address (i.e. you may be running more than one
     domain of SAXs), then give it as the final argument.

FILES
     hosts

     contains a list of hosts that may be running the service.
     The format is simple: each line is a list of space separated
     host names. This maps onto a "line" of hosts in the X dimen-
     sion, spatially.  The set of lines is mapped into parallel
     lines of hosts separated in the Y dimension.  There is no
     explicit provision for spacing the hosts by more than a sin-
     gle unit, but you can put any old string as a host name
     (except that each server you run must actually appear there
     for the servers to "know" their X,Y coordinates). So if you
     want a non-linear audio space, use arbitrary spacer names.





Sun Release 4.0	   Last	change:	UCL, Mar 1993			1
BUGS
     Many

     Startup volume set	by previous audio application -	this is	a
     pain. I'd like to start at	an absolute volume, but...

AUTHOR
     UM, UCL
     I.Lastname@cs.ucl.ac.uk


From rem-conf-request@es.net Tue Mar 16 13:02:39 1993
Date: Tue, 16 Mar 93 14:47:57 CST
From: jim@tadpole.com (Jim Thompson)
To: rem-conf@es.net
Subject: seeking SCSI frame grabber
Content-Length: 400
Status: RO
X-Lines: 15

Hi,

I've got a machine here that won't take any kind of internal 'card', but
it does have a nice, quick (10MB/sec) SCSI channel on it (as well as 
Sun-style (AMD79C30) audio).  I'm looking for a way to get a digitized 
video stream into the machine, in order to do over-the-network video
conferencing.

Does anyone have ideas or sources for a SCSI frame grabber?

TIA,

Jim
						jim@tadpole.com


From rem-conf-request@es.net Tue Mar 16 13:56:08 1993
Date: Tue, 16 Mar 93 13:42:00 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net, jim@tadpole.com
Subject: Re: seeking SCSI frame grabber
Content-Length: 1927
Status: RO
X-Lines: 58

> Hi,
> 
> I've got a machine here that won't take any kind of internal 'card', but
> it does have a nice, quick (10MB/sec) SCSI channel on it (as well as 
> Sun-style (AMD79C30) audio.)  I'm looking for a way to get a digitized 
> video stream into the machine, in order to do over-the-network video
> conferencing.
> 
> Does anyone have ideas or sources for a SCSI frame grabber?
> 
> TIA,
> 
> Jim
> 						jim@tadpole.com
 

 Here's something I found in the December '92 MacUser:

-----------------------------------------------------------------------------

SCSI Video Frame Grabber - For ANY Macintosh

New ComputerEyes/RT offers affordable, accurate real-time 24-bit color
video frame capture for any Macintosh computer (SE, Classic, LC, IIsi, 
Mac II, Quadra, etc.). Portable external SCSI device is easily moved 
from Mac to Mac.

Real-time video preview directly on the monitor. Fast, full-screen 640x480
image grab in 1/30th of a second. Supports all Macintosh 8-bit and 24-bit
displays.  Outputs standard TIFF and PICT files.

Also supports QuickTime for capturing video animations!

See your dealer or call (800) 346-0090 for more information and free demo
disk.

LIST PRICE - $599.95

COMPUTER EYES
Digital Vision, Inc.
270 Bridge Street
Dedham, MA 02026
(617) 329-5400

-----------------------------------------------------------------------------

I wonder if it also works on PowerBooks... A camera, a microphone, a CE/RT,
the right software, and a network connection, and, voila, you're all set 
to conference....


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



From rem-conf-request@es.net Tue Mar 16 14:21:46 1993
To: rem-conf@es.net
Cc: guyc@garcia.labs.tek.com
Subject: MBONE Intern
Date: Tue, 16 Mar 93 14:04:08 PST
From: guyc@garcia.labs.tek.com
Content-Length: 594
Status: RO
X-Lines: 16

I hope this is an appropriate use of the mailing list, but it is of potential 
interest to many of you:

I may have an opening for a summer student who is familiar with 
multicast routing, the MBONE and related software to help me install
and experiment with a local (company-wide) experimental mini-BONE. 
Can anyone point me in the direction of some schools that are particularly
active in this area?

Thanks.

--------
Email:	    guyc@strl.labs.tek.com
US Mail:    Guy Cherry, Computer Research Lab, Tektronix, Inc.  
                Box 500  MS 50-662, Beaverton OR 97077
Phone:	    503-627-1123

From rem-conf-request@es.net Wed Mar 17 01:28:12 1993
To: " (Jim Thompson)" <jim@tadpole.com>
Cc: rem-conf@es.net
Subject: Re: seeking SCSI frame grabber
Date: Wed, 17 Mar 93 09:54:37 +0100
From: Christian Huitema <Christian.Huitema@sophia.inria.fr>
Status: RO
Content-Length: 431
X-Lines: 11

Jim,

The SCSI camera is one of my pet topics. A SCSI frame grabber is probably a
large animal -- one has to do the analog to digital conversion, buffering,
etc. But one could take a CCD cell and hook a SCSI interface directly to it.
Rumor has it that Sony produces a credit-card-size CCD camera for a very
reasonable price; hooking a couple of PALs to it should not be too hard...

Any entrepreneur around here?

Christian Huitema

From rem-conf-request@es.net Wed Mar 17 06:04:40 1993
Date: Wed, 17 Mar 93 08:40:49 EST
From: herman@sunpix.East.Sun.COM (Herman Towles - Sun NC Development Center)
To: jim@tadpole.com, Christian.Huitema@sophia.inria.fr
Subject: Re: SCSI Camera
Cc: rem-conf@es.net
Content-Length: 1978
Status: RO
X-Lines: 42


A CCD is inherently an analog device and there is considerable processing
of the output to produce the component or composite video signal. So just
connecting a SCSI interface to a CCD is not that simple.

Sony and other vendors are producing card camera modules - most are closer
to a 3x5 index card than a credit card in size. Most of these products are
entirely analog today, but the transition has begun to DSP-based solutions.

Once we are DSP-based then digital interfaces will be easier, but one must
not forget that this industry is driven by consumer product demand - namely
camcorders. And, as long as camcorders have composite analog video interfaces
then the processing chips (whether analog or digital) will support this
standard.

Regardless of all this, digitally interfaced cameras for videoconferencing
are an interesting idea, but we would need something faster than SCSI.
Video is sampled for decode at either:

		13.5MHz  (NTSC and PAL for CCIR-601 - non-square pixels) 
		12.26MHz (NTSC - Square pixels)
		14.75MHz (PAL - Square pixels)

Most decoders output YUV or RGB - 3 components at 8-bits minimum each - now
we need 36-45MBytes/sec bus bandwidth. Some savings can be had if we use
YUV 4:2:2 or YUV 4:1:1 video - here, the chrominance signals (U,V) are sub-
sampled horizontally with respect to the luminance, Y. e.g., 4:2:2 video as
defined in CCIR-601 and used for the DCT compression algorithms can reduce
this initial video data rate to 25-30MBytes/sec.

The video industry has a digital interface definition (CCIR-656) tied to 
their component digital video standard (CCIR-601). There are parallel and
serial versions. Another possible digital interface is the new IEEE P1394
interface being developed.

Needless to say, many folks are looking at all these options. So maybe we
will see a low cost digital interfaced color camera someday in the not so
distant future. Don't misread this, this is not insight on a future Sun 
product.


Herman Towles

From rem-conf-request@es.net Wed Mar 17 06:10:32 1993
To: herman@sunpix.east.sun.com (Herman Towles - Sun NC Development Center)
Cc: jim@tadpole.com, rem-conf@es.net
Subject: Re: SCSI Camera
Date: Wed, 17 Mar 93 15:04:11 +0100
From: Christian Huitema <Christian.Huitema@sophia.inria.fr>
Status: RO
Content-Length: 1028
X-Lines: 20

Herman,

Your computations are indeed absolutely correct, except for one detail -- there
is no point in an input device for a signal that could not be processed. This
is the very justification for the SCSI "low" bandwidth interface.

If you start from this, there is another reading of the figures. Assume that
we cannot process more than quarter PAL; then you only need half the frames
(no interlacing), and half the pixels per line. This is down to 6-8 MBytes per
second. In fact, if you look at NV or IVS, you would realize that we don't
really need 25 or 30 frames per sec *now*; 10 would probably be fine. Thus, we
are down to 3 - 4 MBps. Add a little control, like using a SCSI "ioctl" to
decide when to grab the image and switch between color and grey-scale, and you
get a low-end camera...

Indeed, these are the trade-offs of today. Should you come out tomorrow with
a super-fast 500 MIPS machine, then it would probably also be equipped with
some form of super-SCSI, and we would go for the real thing!

Christian Huitema

From rem-conf-request@es.net Wed Mar 17 09:46:45 1993
Date: Wed, 17 Mar 93 11:35:05 CST
From: jim@tadpole.com (Jim Thompson)
To: Christian.Huitema@sophia.inria.fr
Subject: Re: seeking SCSI frame grabber
Cc: rem-conf@es.net
Content-Length: 521
Status: RO
X-Lines: 14


> From huitema@mitsou.inria.fr Wed Mar 17 02:59:15 1993
> 
> The SCSI camera is one of my pet topics. A SCSI frame grabber is probably a
> large animal -- 

The one from Digital Vision is supposedly 3" (H) x 10" (D) x 12" (W) and
weighs approx 3 lbs.  I'm currently 'discussing' with DV the possibility of
getting enough information out of them to hack together a version of NV, IVS
or VAT (no source for the last one, sigh) to demonstrate 'proof of concept'.

Fortunately, my CEO is very interested in the project.

Jim

From rem-conf-request@es.net Wed Mar 17 23:48:47 1993
To: jim@tadpole.com (Jim Thompson)
Cc: Christian.Huitema@sophia.inria.fr, rem-conf@es.net
Subject: Re: seeking SCSI frame grabber
Date: Wed, 17 Mar 93 23:29:41 -0800
From: berc@src.dec.com
X-Mts: smtp
Content-Length: 1403
Status: RO
X-Lines: 27


In the fall of 1990 I bought a SCSI greyscale framegrabber for 
my DECstation 5000/200 and did some networked video experiments. 

The grabber was built by Analogics/CDA, in Peabody MA, (800) 237-1011.  
It's about 7"x3"x10", and weighs five or six lbs. with fan and power 
supply.  Internally it's composed of two modules: a frame grabber/frame 
buffer (for video out) and a SCSI interface.

I was able to capture, compress, and transmit 30fps at 160x120 and 
15fps at 320x240.  The bottlenecks were: (a) the frame grabber couldn't 
overlap (double buffer) capturing and SCSI transmission, (b) though 
it's fine for large disk transfers, SCSI is pretty inefficient and 
can induce an awful lot of CPU overhead (part of this is my fault: 
I did this before CAM, and wrote a user-space SCSI driver - every 
interrupt in the SCSI command/acquire/transfer conversation was one 
or two context switches), and (c) my compression schemes were a bit 
too computationally complex for the CPUs of that day.  I tried out 
several intraframe compression schemes, including run-length encoding, 
block truncation, and absolute moment block truncation.

I was very happy with the performance of the frame grabber, and the 
Analogic people were helpful and easy to deal with.  The experience 
with the frame grabber is what convinced us to build more hardware 
support for computer-integrated networked video.

lance

From rem-conf-request@es.net Thu Mar 18 08:59:10 1993
Date: Thu, 18 Mar 93 17:35:03 CET
From: Frank Hoffmann <HOFFMANN%DHDIBM1.BITNET@vm.gmd.de>
Subject: RTP
To: rem-conf@es.net
Content-Length: 2983
Status: RO
X-Lines: 67


 We have some comments on the RTP Internet Draft (December version)

 1. Checksum Handling

    The RTP packet format (Figure 1 in the protocol specification) does
    not contain a checksum field. Since not all media encodings are able
    to correct bit errors themselves, this makes it impossible to
    implement the reliability classes described in section 3.10 (main
    document) when running RTP on top of protocols that provide an
    unreliable data service, like ST2.

    Thus we propose that separate checksums for header and data
    be introduced.

 2. Reliability Classes

    From the RTP specification it is not clear how reliability
    class #4 ('correct', section 3.10) could be implemented.
    In the previous sections it is mentioned that traditional
    retransmission is not desirable for real time communication.
    Perhaps it should be indicated how FEC or similar mechanisms
    could achieve the same functionality for the 'correct' class.

 3. RTP Gateways

    RTP Gateways are entities that can connect two or more underlying
    'connections' of an arbitrary transport system (TS).


    RTP SENDER           RTP GW1          RTP TARGET
     !                   !     !                 !
    TS------CONN 1-------  TS   -----CONN 2-----TS


    The question is which identifier is used by RTP to forward
    incoming data packets of CONN-1 to an outgoing TS-connection.
    In the RTP description we found elements like the flow-ID
    and a BYE RTCP message, which could be used to establish and remove
    an RTP flow from a sender over different RTP gateways
    to the final targets. But we are missing some header information
    that allows the RTP gateways to map the flow-ID to targets.

    One solution could be to transmit the destination address as an
    optional header field in the first data packet of a flow.  The
    sender should also be able to send the BYE message if it wishes
    to delete a data flow.


 Frank Hoffmann
 Luca Delgrossi


 #====================================================================#
 " Frank Hoffmann, Luca Delgrossi                                     "
 " IBM European Networking Center                                     "
 " Vangerowstr. 18                                                    "
 " D6900 Heidelberg 1                                                 "
 " Germany                                                            "
 #====================================================================#
 " TEL: +49-6221-594330                                               "
 " FAX: +49-6221-593300                                               "
 #====================================================================#
 " E-mail:                                                            "
 "          hoffmann@dhdibm1.bitnet                                   "
 "          luca@dhdibm1.bitnet                                       "
 "                                                                    "

From rem-conf-request@es.net Thu Mar 18 10:33:29 1993
From: Fengmin Gong <gong@concert.net>
Subject: Re: Sample RTP header file
To: frederic@parc.xerox.com (Ron Frederick)
Date: Thu, 18 Mar 1993 13:24:56 -0500 (EST)
Cc: rem-conf@es.net
X-Mailer: ELM [version 2.4 PL20]
Content-Type: text
Content-Length: 1479
Status: RO
X-Lines: 43

Ron Frederick wrote in a previous message:
>
>Hello everyone...
>
>I have been working on making my packet video program 'nv' use the
>evolving RTP transport protocol, and I put together a header file which is
>based on the latest version of the RTP Internet Draft I found at UMass.
>Since I haven't seen anything like this online yet, I thought it made sense
>to send it out to the list... Corrections, additions, or other comments are
>welcome. It would be nice if we could come up with a single header file
>that everyone could use when writing RTP applications.
>
>Note that as I have written it, the header file probably won't work right
>on little-endian machines. That'll probably need to be dealt with in the
>final version...
>

Everything looks just fine, except that for the RTCP CDESC header the
RTP draft specifies an 8-bit clock quality field, not the 16-bit field
defined in the header file, as highlighted below:

>
>/* RTCP CDESC option header */
>struct rtcpcdeschdr {
>	u_char	rtch_fin:1;	/* final option flag */
>	u_char	rtch_type:7;	/* option type */
>	u_char	rtch_optlen;	/* option len */
>	u_char	rtch_x1:2;	/* reserved (must be 0) */
>	u_char	rtch_content:6;	/* content id */
>	u_char	rtch_x2;	/* reserved (must be 0) */
>	u_short	rtch_rport;	/* return port */

Should this be "u_char rtch_cqual;"?
>	u_short	rtch_cqual;	/* clock quality */

>	u_short	rtch_x3;	/* reserved (must be 0) */
>	u_long	rtch_cdesc;	/* content descriptor */
>};


Fengmin Gong
gong@concert.net


From rem-conf-request@es.net Thu Mar 18 11:30:23 1993
Date: Thu, 18 Mar 1993 11:10:37 PST
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: Fengmin Gong <gong@concert.net>
Subject: Re: Sample RTP header file
Cc: rem-conf@es.net
Content-Length: 415
Status: RO
X-Lines: 18

Fengmin Gong writes:

> Should this be "u_char rtch_cqual;"?
>>	u_short	rtch_cqual;	/* clock quality */

Yes - thanks.. Also, that means the following field should be changed
from:
>	u_short	rtch_x3;	/* reserved (must be 0) */

to:
	u_char	rtch_x3;	/* reserved (must be 0) */

I also recommend adding a version number define to the top of the file:

#define RTP_VERSION		1
--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Thu Mar 18 18:49:58 1993
Date: Thu, 18 Mar 93 18:37:34 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: Common desktop agreement
Status: RO
Content-Length: 1641
X-Lines: 39


The announcement by HP, IBM, SCO, SunSoft, Univel and USL at UNIFORUM
of a Common Open Software Environment (COSE) includes:

"...	HP, IBM, SCO, SunSoft, the software subsidiary of Sun
Microsystems, Inc., Univel and USL have defined a specification for a
common desktop environment that gives end users a consistent look and
feel. They have defined a consistent set of application programming
interfaces (APIs) for the desktop that will run across all of their
systems, opening up a larger opportunity for software developers. The
six companies have each decided to adopt common networking products,
allowing for increased interoperability across heterogeneous
computers.  In addition, they have endorsed specifications, standards
and technologies in the areas of graphics, multimedia and object
technology...

				...


Multimedia

	The six companies will submit a joint specification for the
Interactive Multimedia Association's (IMA) request for technology. This
will provide users with consistent access to multimedia tools in
heterogeneous environments and enable developers to create
next-generation applications using media as data.

...."

Anyone from any of these organizations able to comment on the specification
for IMA's RFT??


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Fri Mar 19 11:37:24 1993
From: steve@ibmpa.awdpa.ibm.com (Steve DeJarnett)
Subject: Re: Common desktop agreement
To: ari@es.net (Ari Ollikainen)
Date: Fri, 19 Mar 1993 10:54:36 -0800 (PST)
Cc: rem-conf@es.net
X-Mailer: ELM [version 2.4 PL13]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 1982
Status: RO
X-Lines: 45

Ari Ollikainen wrote:
>The announcement by HP, IBM, SCO, SunSoft, Univel and USL at UNIFORUM
>of a Common Open Software Environment (COSE) includes :
>				...
>
>Multimedia
>
>	The six companies will submit a joint specification for the
>Interactive Multimedia Association's (IMA) request for technology. This
>will provide users with consistent access to multimedia tools in
>heterogeneous environments and enable developers to create
>next-generation applications using media as data.
>
>...."

	The companies are jointly responding to the IMA's RFT for Multimedia
System Services.  This basically specifies what services will be provided by
the system vendors that application writers can utilize in building multimedia
applications.

	The Multimedia System Services RFT was jointly authored by people from
DEC, HP, IBM, and Sun.  HP, IBM, SCO, SunSoft, Univel, and USL are working on a
joint response to the IMA RFT based on technology from HP, IBM, and Sun.  If 
the response is selected as the IMA's "Recommended Practice" for Multimedia 
System Services, the companies will be producing implementations based on the
response.

	The other IMA RFTs -- "Multimedia Data Exchange" and "Scripting 
Language for Interactive Multimedia Titles" -- are not directly a part of the
joint announcement, although any or all of the companies may be preparing
responses to those RFTs.

>Anyone from any of these organizations able to comment on the specification
>for IMA's RFT??

	I'm not sure exactly what you're asking for comments on.  Do you want
to know more about the IMA's RFT, or the companies' joint response to the IMA??
Specify in more detail and I'll see what I can come up with in response.  I
hate giving "non-answers"...

----
Steve DeJarnett			Internet:  steve@ibmpa.awdpa.ibm.com
IBM PS Multimedia Mountain View IBM IPNET: steve@ibmpa.awdpa.ibm.com
(415) 694-3896			IBM VNET: dejarnet at almaden (only if you must)
These opinions are my own.  I doubt IBM wants them.......

From rem-conf-request@es.net Fri Mar 19 12:11:46 1993
Date: Fri, 19 Mar 93 13:57:32 CST
From: jim@tadpole.com (Jim Thompson)
To: berc@src.dec.com
Subject: Re: seeking SCSI frame grabber
Cc: Christian.Huitema@sophia.inria.fr, rem-conf@es.net
Content-Length: 603
Status: RO
X-Lines: 15


> From berc@src.dec.com Thu Mar 18 01:30:05 1993
> 
> In the fall of 1990 I bought a SCSI greyscale framegrabber for
> my DECstation 5000/200 and did some networked video experiments.
> 
> The grabber was built by Analogics/CDA, in Peabody MA (800) 237-1011.  
> It's about 7"x3"x10", and weighs five or six lbs. with fan and power 
> supply.  Internally it's composed of two modules: a frame grabber/frame 
> buffer (for video out) and a SCSI interface.

Analogic/CDA's number is (now) +1 508 977 3000.  They are about two months
from production of a color version.  They seem helpful thus far.

Jim

From rem-conf-request@es.net Sun Mar 21 00:18:09 1993
Date: Sun, 21 Mar 93 10:46:04 +1000
From: bob@cs.su.oz.au (Bob Kummerfeld)
To: rem-conf@es.net
Subject: IETF
Status: RO
Content-Length: 109
X-Lines: 4

Is there any plan to audio/video-cast sessions from the IETF meeting in
Columbus? If so, what sessions?

Bob

From rem-conf-request@es.net Mon Mar 22 02:00:40 1993
Posted-Date: Mon 22 Mar 93 01:43:02-PST
Date: Mon 22 Mar 93 01:43:02-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Draft agenda for AVT WG
To: rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 2445
Status: RO
X-Lines: 59

To the Audio/Video Transport Working Group:

I am pleased to see the recent Internet Draft from Thierry and
Christian on Packetization of H.261 video in RTP, and also to hear
that RTP has been implemented in NEVOT and nv.

At the November meeting, we resolved a number of issues which Henning
subsequently detailed in the set of Internet Drafts issued in
December.  A few issues remain to be resolved, for example the
question of the format of source identifiers as discussed recently on
this list.  I plan to send an outline of the issues and proposed
solutions to this list in the next couple of days, then to lead a
discussion of those issues during the Tuesday afternoon sessions.
(Unfortunately, Henning won't be able to join us because IETF and
Infocom overlap.)

Again at this meeting, I propose that the Wednesday session be
dedicated to a discussion of "implementers agreements" for
interoperation in the applications that will use RTP.  I know that Ron
Frederick will be at this meeting, and I hope the implementers of
other "packages" will be there (or remotely participating), too.

It is my expectation that the AVT WG will NOT meet in Amsterdam.  I
believe we'll be ready after this meeting to finish the protocol spec
and issue it as an "Experimental" RFC to foster wider implementation,
testing and use.  I believe the main protocol spec is quite close
already, though we have to learn more about how profiles should be
defined, including port assignment.  So, it may be appropriate to go
on hiatus and re-convene as needed.  We can discuss this, too, in
Columbus.  Comments?
						-- Steve Casner


		       Audio/Video Transport WG
				   
		      D R A F T     A G E N D A

Tuesday, March 30, 1:30-3:30 and 4:00-6:00

  - Quick review of the draft protocol specification

  - Discuss open issues and proposed solutions, seeking the
    traditional "rough consensus" on the protocol specification

  - Assess what further WG efforts are needed


Wednesday, March 31, 9:30-12:00:  "Implementors Session"

  - Brief presentations on new implementations on top of RTP,
    if the implementers are ready to do so.

  - Last time we talked about APIs for the coding routines, and the
    claim was made that it was too early.  Is that still true?

  - Separate from the transport protocol, what protocols and/or
    implementation agreements might be layered on top to achieve
    interoperation for near-term experimentation?
-------

From rem-conf-request@es.net Mon Mar 22 02:59:49 1993
Posted-Date: Mon 22 Mar 93 02:43:22-PST
Date: Mon 22 Mar 93 02:43:22-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: RTP
To: HOFFMANN%DHDIBM1.BITNET@vm.gmd.de, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 2490
Status: RO
X-Lines: 50

Frank Hoffmann and Luca Delgrossi:

Thanks for your comments on the RTP draft.  Some replies:

 1. Checksum Handling

    During previous work on ST-2, I have assumed that it would be
    desirable to define a service equivalent to that of UDP over IP
    but with the resource reservation that ST-2 can provide.  Since
    ST-2 already provides a length field, and the port numbers are
    established during connection setup, all that remains for this
    UDP' over ST-2 is the checksum.  As for UDP over IP, the NextPcol
    field in the ST-2 Origin parameter would indicate UDP' (whether
    this should be the same code as UDP is TBD), and the port numbers
    would indicate the protocol above UDP' (as is the case for UDP).

    This may be a good or bad idea, and I would be interested in
    arguments either way.  I believe the good is in providing
    commonality with IP, and in providing the checksum service in a
    common form to a variety of applications.  The bad, compared to a
    checksum in RTP, might be a more constrained functionality.

 2. Reliability Classes

    In general, I believe the real-time applications will not want to
    do retransmission, or at least not as a generic service.  However,
    the RAD option provides a means to request retransmission in an
    application-specific manner.  See the new draft from INRIA
    (draft-ietf-avt-video-packet-00.txt) for an example.

 3. RTP Gateways

    I don't think I fully understood your third point.  The RTP header
    information allows identification of a specific flow from a
    specific source, but does not identify targets (destinations).  If
    the TS is IP Multicast, then the source never identifies the
    specific destinations.  If the TS is ST-2, then the targets are
    specified during the connection setup.  The mapping of a session
    on one side of an "RTP-level gateway" to a session on another side
    may really be considered to be done at the application level.  For
    example, packets arriving for a particular IP Multicast session
    would have a specific destination IP Multicast address and port.
    When the gateway is being set up, a mapping could be defined from
    that session to an ST-2 connection to a specified set of targets.
    Nothing in the RTP header identifies those targets.  Either a
    manual procedure or some higher-level control protocol is likely
    to be involved in setting up the session mapping in the gateway.

							-- Steve
-------

From rem-conf-request@es.net Mon Mar 22 08:53:19 1993
Date: Mon, 22 Mar 93 11:35:03 EST
From: broscius@thumper.bellcore.com (Al Broscius)
To: rem-conf@es.net
Subject: Port of vat to Solaris available ??
Content-Length: 111
Status: RO
X-Lines: 6


Does anyone know of the existence of a port of vat/sd/etc. to Solaris for
the Sparc LX machines ?

Thanks
-al

From rem-conf-request@es.net Tue Mar 23 01:11:28 1993
Date: Tue, 23 Mar 1993 00:20:31 -0800
From: schooler@ISI.EDU
Posted-Date: Tue, 23 Mar 1993 00:20:31 -0800
To: rem-conf@es.net
Subject: Re: Template and bibliography for confctrl BOF
Cc: schooler@ISI.EDU, confctrl@ISI.EDU
Content-Length: 9985
Status: RO
X-Lines: 271

To foster wider discussion, I am forwarding to rem-conf some ideas
that already have been posted to the confctrl mailing list.

The first session of the Conference Control BOF at the upcoming IETF
will be used for several presentations on different confctrl schemes.
The emphasis will be on fleshing out design assumptions, tradeoffs,
complexity, scalability etc.

Below is a draft template to be used as a guideline for these 
presentations.  Answers to the template are intended to help define what 
confctrl is and to help understand the functional requirements of a generic 
confctrl protocol.  I would like us to end up with a healthy cross
section of confctrl approaches, plus specifics on design choices.

Please comment on the template.  Are there questions that should be 
reworded, added, removed?   

If you have implemented an existing teleconferencing system, 
application or protocol, I would especially appreciate it if you
would fill in the template (or your improved version of the template);  
short of that, please suggest a couple papers that capture the essence 
of your work -- as it relates to conference control.

I am also interested in identifying projects that have been
influential or seminal in this area.  Please forward me any ideas you
might have, otherwise you will end up with my own biased opinions :-)
On a related note, I have started a bibliography/recommended reading
list on confctrl, so I am interested in references to your favorite
readings.

Eve


~~~~~~~~~~~~~~~~~~~


		    Conference Control BOF Template
		    -------------------------------

1. Name of project, program and/or protocol.

2. Contact person, affiliation and e-mail address.

3. Target operating environment and key design considerations:

   - WAN vs LAN 
   - digital vs analog media
   - the kinds of collaborative media used in your system 
     (e.g., real-time audio, video, animations, landsat images) 
   - packet technology vs ISDN
   - room-to-room vs desktop conferencing
   - ???

4.a Type of conference styles supported by your system/protocol.

4.b Profile of user community: 

   - expertise level 
   - formality of meetings
   - demand for quality of service
   - mechanisms for scheduling/reservation of system

5. Architecture assumptions: 

   - distributed vs centralized model vs hierarchical
   - system component(s) responsible for conference control
   - degree of homogeneity in end-system capabilities 
   - multicast integration
   - directory services
   - support for quality of service
   - open vs closed membership (e.g., only preregistered users)
   - ???

6.a How do you define conference control?  

6.b Conference control functionality supported.

6.c Other control functions you would like to support.

7. Conference control protocol details: 

  - explicit vs implicit setup
  - interconnectivity of participants
  - state sharing
  - robustness
  - scaling properties
  - ???

8. Hardware/software platforms.  

9.a What specifically has or has not been implemented?

9.b What was unexpectedly easy or difficult to implement?

9.c What might you change as a result?

10. If available, suggested readings about your work on confctrl.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Example ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
		

1. Name of project, program and/or protocol.

	Multimedia Conference Control program (MMCC),
	Conference Control Protocol (CCP)

2. Contact person, affiliation and e-mail address.

	Eve Schooler, USC/Information Sciences Institute, 
	Multimedia Conferencing Project,
	schooler@isi.edu

3. Target operating environment and key design considerations:

	Focus has been on WAN teleconferencing.  All media packetized; 
	couples digital packet audio and video with shared workspaces.  
	Originally used in room-to-room conferencing over DSInet, more 
	recently re-oriented for personal desktop conferencing across 
	DARTnet and MBONE.  Operational testbeds, thus high premium 
	on reliability and fault tolerance.  

4.a Type of conference styles supported by your system/protocol.

	Supports small-sized conferences (ideal for 2-5 members, but
	the control model is probably suitable for tens of members).

4.b Profile of user community: 

	Novice user community over DSInet; conference operators
	typically orchestrate sessions via remote control.  Tends toward
	more formal meetings, ever since the system became more
	production-oriented.  Concern about system robustness and
	quality (resolution) of media.  Probably some concern over
	openness or security of the system.

	DARTnet and MBONE are composed of users who are researchers and
	willing to use software in a more impromptu fashion.  Extremely
	informal usage.  Happy to have access to whatever software and
	bandwidth are available.

5. Architecture assumptions: 

	Distributed, peer-to-peer model.  A single connection manager
	or conference control element resides at each end system.  It
	is responsible for a particular user, at a particular machine
	and port number.  It interacts with underlying media agents
	that handle the specifics for the various media types (e.g.,
	realtime data processing).  The peer connection managers use the
	CCP to communicate conference-related requests to each other.

	Heterogeneous end-system capabilities in terms of encoding
	schemes and data rates.  All end systems must match at
	conference setup.  The connection manager has the task of
	synchronizing configurations to the initiator's request.

	Currently the control protocol relies on a group interface
	in order to be able to send to multiple participants at once;
	it is not tied to the use of IP multicast.  The control protocol
	is used however to share multicast addresses of underlying tools
	with peer connection managers.

	No global directory services available.  This has led to
	well-contained naming, where each site maintains a static
	configuration file of its favorite remote sites and their
	addresses.  More recently, allowing on-the-fly additions to 
	each site's local cache of user aliases.

	Depending on the testbed and mode of software, bandwidth 
	reservations are provided by ST, or not at all when using
	IP-Multicast.  Looking to experiment with other QoS schemes 
	in the near term.

6.a How do you define conference control?  

	The management and coordination of multiple sessions, and
	their multiple users, in multiple media.

6.b Conference control functionality supported.

	- session:
		- pre-scheduling, status, exclusive sessions
	- membership: connect, invite, join, disconnect
	- configuration: (only at set up) 
		- set of media (or underlying tools) to include
		- parameters for each media

	- control-related:
		- remote camera switching
		- video "floor selection"
			- sender selected (for limited bandwidth scenarios)
			- receiver selected (for personal preferences)
		- "autopilot", where one site can auto control another site

6.c Other control functions you would like to support.

	- modification of media set and parameters of ongoing sessions
	- session archiving
	- include combination nodes, or reflectors, in the control path
	  to perform encoding translations, mixing functions (4-to-1),
	  etc.
	- interface for security and qos preferences
	- merge conferences
	- side chats
	
7. Conference control protocol details: 

	Tightly-controlled session model, entailing explicit setup
	and teardown.  By default, there is n-way connectivity among
	participants.  Each participant stores state information such as the
	membership list and each member's current state (involved, 
	used-to-be-involved).  All participants are apprised of 
	conference state at initiation and when any new members join 
	or old members leave.  Can opt to maintain global state more 
	rigidly through stricter synchronization methods, if desired.  

	CCP is sensitive to WAN operation; it attempts to provide
	reliable messaging among peers and is able to repair state 
	information in the event of temporary network outages.

	Scale is an issue.  Membership changes (at setup time and
	throughout the session) result in 1-to-n communication.
	Rigorous global state maintenance requires n-to-n communication.

8. Hardware/software platforms.  

	Sunview and X-based GUIs.  Moving toward Tk/tcl.

	UNIX-oriented system code and libraries; currently runs on 
	Sun workstations.

	CCP is built on top of a reliable, group messaging service that
	in turn uses UDP sockets.  Expect to replace the group messaging
	service with other schemes (e.g., ISIS) as they become more readily 
	accepted or are in widespread use.

	RPC-based control of servers designed for local hardware control 
	(crossbar switches, video codecs, cameras, monitors).

9.a What specifically has or has not been implemented?

	The MMCC program, the GUI to the system, lags behind the
	capabilities of the CCP protocol.  Now working on GUI support for
	multiple sessions at once, more formalized interface between
	connection manager and media agents, and additional configuration 
	language detail for capability choices.

9.b What was unexpectedly easy or difficult to implement?

	Maintaining rigid global state and resynchronization are
	difficult.  The CCP spec calls for a variety of policies, only
	some of which are reflected in the GUI of MMCC.  Too much
	flexibility in the protocol leads to complicated state
	transitions.  From an implementor's point of view, the original
	CCP spec was too complicated; as a consequence only the most
	common functions have been realized in the latest version of
	code. The follow-on spec needs to be simpler, and should also
	distinguish between recommendations about core functionality and 
	provisions for special services.  Separating confctrl into smaller
	units made the problem clearer (session management vs. membership
	management vs. configuration management).

9.c What might you change as a result?

	Begin with scaling as a design goal.
	
10. Papers.

	FTP venera.isi.edu:pub/hpcc-papers/mmc/README.txt for details
	on further readings.
	

From rem-conf-request@es.net Tue Mar 23 01:11:40 1993
Date: Mon, 22 Mar 93 18:59:38 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: Video clips ...
Content-Length: 10505
Status: RO
X-Lines: 224


Some relevant information from the February 1993 Internet Monthly Report, 
posted as a public service for those who might not otherwise get, read, 
or notice...

BOLT BERANEK AND NEWMAN INC.
----------------------------

     ...

     Real-time Multicast Communications and Applications

     On February 9th, we demonstrated the use of the Video Information
     Server (VIS) over a wide area network at DARPA.  The set of
     machines that comprises the VIS was located at BBN and a client
     machine running video applications was set up at DARPA for the
     demo.  The machines were located on local Ethernets (at each site)
     and these were in turn connected to the DARTNET which supported the
     wide area communications.

     The video server machines are Sun workstations.  These machines
     control a variety of video devices including video disc players and
     analog video switches.  One of the server machines contains a
     Parallax board which is used to convert analog video to digital
     video for transmission over the wide area net. The video server
     also contains a WAIS database of video information, obtained by
     recording and decoding closed captioned video and indexing the
     closed captioned text to the video.  Users search the WAIS database
     to select video of interest.

     The client machine used in the demo was a Sun workstation with a
     Parallax board which is used to receive and decode digital video
     and display the video in a window on the workstation.  All of the
     video control, i.e. searching the video database for the desired
     video clip, locating an appropriate video device to play the clip,
     and controlling the video device (i.e. forward, pause, play, etc.),
     was done remotely over the wide area network.  There were no local
     video devices.  The video was delivered digitally over the wide
     area network.

     This demo showed several new features of the Video Information
     Server.  The original VIS was designed to work in a local area.
     Not only was video delivered via analog lines, but many of the
     control mechanisms relied on features only available in a local
     area network, for example, local file sharing as a method for
     obtaining information about video clips.  The demo showed not only
     the digital delivery of video, but also the use of new mechanisms
     for video searching and control of video devices, which enable
     these functions to be done on a wide area network.  We have now
     demonstrated the use of the Video Information Server in a wide area
     network, and will continue to work on this system to improve
     reliability and add new features.

     In parallel with the VIS efforts, we have completed implementation
     of "anycasting" service and a version of "multi-level flows".

     o Anycasting is a concept that allows an application to address a
       replicated object and find the nearest/best one.  This feature
       can make it possible to place relatively static information about
       servers and services in regional service directories distributed
       throughout the Internet, while allowing the choice of a
       particular server (of an indefinite and dynamically changing
       group of servers) to be made by the network according to network
       and client conditions.

     o Multi-level data flows are a special case of resource
       coordination in which a group of information flows forms a whole:
       for example, different levels of video resolution.  Network
       support for multi-level data flows can be used to permit a
       recipient to specify what part of the total data flow should be
       conveyed, when the data has been separated into, say, high,
       medium, and low resolution components.  This is useful in
       extending applications such as video conferencing into
       environments where some of the sites may be connected by low
       speed links.  The source sends at full rate to the multicast
       address, but only the low resolution data would be delivered to
       the disadvantaged sites.

     Karen Seo <kseo@BBN.COM>
		
			[  . . . ]

ISI
---
     ...

     MULTIMEDIA CONFERENCING

     This month we extended the teleconferencing facilities at DARPA and
     ISI by interconnecting the wide-area packet teleconferencing
     systems, which use DARTnet and DSInet, with the ZAPT local-area
     desktop conferencing system.  The ZAPT system, installed both at
     DARPA and ISI, uses analog audio and video with NeXT workstations
     running a custom extension of Bellcore's Touring Machine software,
     and provides a local analog distribution and teleconferencing
     capability.

     MMCC, the multimedia conference control program, allows users to
     select among different codecs for the different wide-area systems,
     and among dedicated conference rooms or the interconnect to the
     desktop system.  To avoid conflicts created by multiple
     conferencing systems needing access to shared hardware (e.g., MMCC
     and ZAPT controlling the crossbar switch, echo canceller, video
     codecs), we integrated resource registration into our device
     servers.  Now, a client application can use the server to reserve
     access to the hardware and prevent another teleconferencing
     application from stealing it.  The resource reservation is also
     fault tolerant, so that owner failure releases the device.
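     A lease-based sketch of such a registry follows; the interface
     names and the 30-second lease are assumptions for illustration,
     not the actual ISI device-server design:

```python
# Minimal lease-based device registry: a client reserves a device and
# must renew within `lease_s` seconds, so a crashed owner's reservation
# expires and the device is released automatically.  Names are invented
# for illustration.

class DeviceRegistry:
    def __init__(self, lease_s, clock):
        self.lease = lease_s
        self.clock = clock          # injectable time source, e.g. time.time
        self.owners = {}            # device -> (owner, expiry)

    def _live(self, device):
        entry = self.owners.get(device)
        if entry and entry[1] > self.clock():
            return entry
        self.owners.pop(device, None)   # lease lapsed: release the device
        return None

    def reserve(self, device, owner):
        entry = self._live(device)
        if entry and entry[0] != owner:
            return False                # held by another application
        self.owners[device] = (owner, self.clock() + self.lease)
        return True

    renew = reserve                     # renewing is just re-reserving

    def release(self, device, owner):
        if self._live(device) and self.owners[device][0] == owner:
            del self.owners[device]
```

     With such leases, a conference controller could hold the crossbar
     switch while running and lose it automatically if it died without
     releasing it.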

     Routines for software decoding of the video data stream produced by
     a Bolter/Concept codec have been integrated into the popular "nv"
     video tool with much help from Ron Frederick at Xerox PARC. A number
     of improvements were made in the decoding as well, including
     interpolation of the data in low-resolution mode.  We are working on
     arrangements to allow release of the decode routines in binary form.

     The paper, "Case Study: Multimedia Conference Control in a Packet-
     switched Teleconferencing System", was completed this month, and
     will appear in the Journal of Internetworking.

     Steve Casner, Eve Schooler, Joe Touch
     (casner@isi.edu, schooler@isi.edu, touch@isi.edu)


MERIT/MICHNET
-------------
     ...

     6. IETF Connectivity

     Merit and ANS have been working with the OARnet staff to ensure
     that good connectivity will be provided to the IETF in Columbus
     later this month. ANS will be adding a second T1 circuit for OARnet
     to the backbone, which will be used mainly for the mbone
     audio/video multicast transmission. Merit has re-engineered its
     Michigan mbone tunnel connections to improve connectivity for
     MichNet as well as for OARnet. ANS will provide an EON RT system
     for encapsulation of OSI CLNP datagrams, in order to allow
     demonstrations of TUBA software.

     ...

 Mark Knopper (mak@merit.edu)



PREPNET
-------
     ...

     The Pittsburgh SMDS demo officially began in November and ended
     January 31.  Participants included PREPnet, Shadyside Hospital,
     Carnegie Mellon University, the Pittsburgh Supercomputing Center,
     and IBM's Industrial Technology Center.  The participants tested
     traffic matrices and the use of SMDS for Internet access and packet
     video applications.  The PSC gathered statistics to evaluate
     throughput of the SMDS link.

     The Bell Atlantic booth at Interop East will include a connection
     to the Pittsburgh SMDS cloud in order to demonstrate packet video,
     file transfer, and general Internet access via SMDS.

     PREPnet NIC (prepnet+@andrew.cmu.edu)


UCL
----

     Working with Thierry Turletti, INRIA, we now have full interworking
     H.261 software, tested for real against GPT and other H.261 hardware
     codecs. In particular, we can source video from a codec, filter off
     the H.221 framing, packetize and multicast over INRIA's protocol
     over IP multicast, and decompress and receive under X in pure
     software quite conveniently.

     We are now trying to reconnect the UK to the MBone so we can run
     this multicast internationally (for the Internet and MICE).

     Two papers were submitted to conferences: one on CBT multicast and
     one on a control-theoretic analysis and design of a video transport
     protocol.

     A note on the (ironic) unsuitability of RPC for building
     distributed programs (in particular, conference control systems)
     was distributed, and will be submitted to a suitable place after
     comments. It can be ftp-d from cs.ucl.ac.uk, in darpa/conf-rpc.ps.Z
     (Unix compressed PostScript).

     John Crowcroft (j.crowcroft@CS.UCL.AC.UK)


			[  . . . ]


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Tue Mar 23 01:15:45 1993
Date: Tue, 23 Mar 1993 00:27:32 -0800
From: schooler@ISI.EDU
Posted-Date: Tue, 23 Mar 1993 00:27:32 -0800
To: rem-conf@es.net
Subject: Re: confctrl BOF template
Cc: confctrl@ISI.EDU, schooler@ISI.EDU, D.Lewis@cs.ucl.ac.uk
Content-Length: 6269
Status: RO
X-Lines: 196


----- Begin Included Message -----

>From D.Lewis@cs.ucl.ac.uk Mon Mar 22 07:44:12 1993
To: schooler@ISI.EDU
Cc: mm-local@cs.ucl.ac.uk
Subject: 
Date: Mon, 22 Mar 93 15:42:28 +0000
From: D.Lewis@cs.ucl.ac.uk


		    Conference Control BOF Template
		    -------------------------------

1. Name of project, program and/or protocol.

	PREPARE (Prepilot in Advanced Resource Management - R2004)
	Part of the Commission of the European Communities RACE II program

2. Contact person, affiliation and e-mail address.

	David Lewis
	Computer Science Department
	University College London
	Gower Street
	London WC1E 6BT
	U.K.

	e-mail: dlewis@cs.ucl.ac.uk
	tel: +44 (0)71 387 7050 x 3706
	fax: +44 (0)71 387 1397

3. Target operating environment and key design considerations:

   - WAN vs LAN 
	the project is developing its own broadband testbed network
	consisting of:
	- a 3-node 155Mbps ATM WAN network (in cooperation with the 
	  Danish BATMAN project)
	- a MAN network
	- Token Ring LANs
	- ATM PBXs

   - digital vs analog media
	all digital audio and video using the following equipment and 
	techniques:
	- SPARCstation inbuilt audio
	- VideoPix card in SPARCstations
	- GEC px64kbps H.261 codecs
	- software H.261 coding/decoding (IVS from INRIA, France)

   - the kinds of collaborative media used in your system 
	The conferencing system used is a development of the conferencing
	system developed at UCL for the RACE I project CAR. This allows
	the inclusion of both custom-built shared applications and standard,
	X-window based applications into conferences. Audio and video
	conferencing applications (e.g. VAT, IVS) are included in this way.

   - packet technology vs ISDN
	Both. IP is used for workstations connected to Token Rings, and B-ISDN
	in the form of ATM AAL1 connections is used for workstations
	connected to ATM PBXs.

   - room-to-room vs desktop conferencing
   	desktop, workstation-based conferencing used

4.a Type of conference styles supported by your system/protocol.

4.b Profile of user community: 

   - expertise level
	The conferencing application is used only for the demonstration of
	network management techniques, which is the aim of the project. Thus
	the conferencing system will not be used "in service" as part of the
	project, but this is planned for both the department at UCL and the
	CEC MICE project, using the same system.

   - formality of meetings
	The demos in PREPARE are based on groups of engineers working closely
	together using shared CAD packages. Floor control is "free for all".

   - demand for quality of service
	QoS will be requested by the user in qualitative terms through
	the conferencing application. The aim in PREPARE is that the
	QoS is then supplied by the network management system.


5. Architecture assumptions: 

   - distributed vs centralized model
	The software model is essentially centralised, relying on a single
	conference server. However, there is no central hub or video
	distribution site, so for the purposes of demonstrating interaction
	with the network management, the conferencing system is distributed.
	We are working generally towards making the software truly
	distributed.

   - system component(s) responsible for conference control
	The central conference server process is responsible for conference
	control. Each participant runs a conference control application that
	provides a user interface to the facilities offered by the server.

   - degree of homogeneity in end-system capabilities 
	Currently the system runs only on SPARCstations; however, use of
	H.261 encoding allows interworking between workstations with either
	hardware or software codecs.

   - multicast integration
	Multicast is used for audio and video transmission, though multicast
	may not be implemented in all parts of the PREPARE testbed.

   - directory services
	X.500 directory services will be available through network management
	services

   - support for quality of service
	implemented through network management services

6.a How do you define conference control?  
	by the following functions:
		- creating a conference
		- joining a conference
		- leaving a conference
		- browsing existing conferences
		- including an application in a conference
		- removing an application from a conference
		- requesting and receiving the floor

6.b Conference control functionality supported.
	see 6.a

6.c Other control functions you would like to support.
	- floor control by chairperson
	- separate floor control for different media

7. Conference control protocol details: 

  - explicit vs implicit setup
	explicit set-up by the user, with named participants and included
	applications only
  - interconnectivity of participants
	through conference server
  - state sharing
	ditto
  - robustness
	conference server obvious weak point
  - scaling properties
	limited by capabilities of machine running conference server

8. Hardware/software platforms.  
	SPARCstation, 
	SunOS,
	requires IP multicast and Sun RPC or ANSA RPC

9.a What specifically has or has not been implemented?
	All conference control mentioned is implemented and well tested
	over local and wide area Internet.
	Interface to network management, QoS control and testing over
	PREPARE testbed not yet performed

9.b What was unexpectedly easy or difficult to implement?
	Use of ANSA for the RPC mechanism was found to weigh down the rest
	of the system, hence the change to Sun RPC

9.c What might you change as a result?
	We hope to evolve the system to be fully distributed, with
	conference control communication over multicast

10. If available, a couple of suggested readings about your work
    on confctrl.

The following are available anonymous ftp from uk.ac.ucl.cs in directory
car:

collab_write.ps.Z
  Multimedia Conferencing as a Tool for Collaborative Writing: A Case Study
    - S. Baydere et al, 1991

complexity.ps.Z
  Coping with complexity and interference: design issues in multimedia
  conferencing systems - M.A.Sasse, M.J.Handley & N.Ismail, 1992

inet92.ps.Z
  Multimedia Conferencing: from prototype to national pilot
    - Mark Handley and Steve Wilbur, 1992

janetvid.ps.Z
  Some Multimedia Traffic Characterisation Results
    - Crowcroft et al, 1992





----- End Included Message -----


From rem-conf-request@es.net Tue Mar 23 06:09:29 1993
Date: Tue, 23 Mar 93 07:47:08 EST
From: atkinson@itd.nrl.navy.mil (Ran Atkinson)
To: rem-conf@es.net
Subject: conference control BOF
Content-Length: 2513
Status: RO
X-Lines: 54


  I would gently suggest that it is not at all wise or technically safe to
defer thinking about the security aspects of conference control mechanisms.
Our experience has uniformly been that security as an add-in doesn't work
-- one really has to at least leave security hooks in the original design.

  I'm not sure if I'll be able to attend the BOFs, but I will be at
the IETF.  Feel free to catch me in the hallway.  In the meantime,
here are some possible security concerns and high level implications
off the top of my head.

1) admitting people that you want to admit but not admitting other folks
   and not letting person A subscribe person B for the conference
   (implies authentication in your conference control protocol)

2) dropping people who want to be dropped but not responding to a forged
   drop-request
   (again implies authentication in your conference control protocol)

3) ensuring integrity of conference data 
   (strong checksum and/or a cryptographic hash; potentially too slow
    for real-time use)

4) keeping confidential who the conference participants are
   (implies encryption of conference control protocol messages, possibly
    using the IP Security Protocol that is being developed by the IETF;
    if you'd like to reuse that it would be wise to indicate this possible
    use to the IP Security Protocol WG so they design-in support)

5) keeping the conference itself confidential
   (implies encryption of the conference and has key mgmt implications;
   some re-use of the IP Security Protocol might be feasible, but it
   would be wise to describe this potential use to the IP Security Protocol
   WG as "customer input" so they add your use to their design criteria;
   be sure to mention the phrase "multicast security" to them )

6) traffic flow security (too hard and not of widespread value, suggest that
   this one be punted from the start)

  NB: Authentication without confidentiality is rumoured to be freely
exportable.  I'm NOT with that part of the government so truthfully I
don't know.  I think Jeff Case and Rob Shirey have separately looked
into this.  Someone might ask them.  SNMPv2 uses the MD5 cryptographic
hash function to provide authentication with what amounts to symmetric
digital signatures; this might provide some starting point to indicate
what kinds of hooks might be needed in a conference control protocol.
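As a rough sketch of that keyed-digest idea (this shows the general technique only, not the actual SNMPv2 message format; the shared secret and message layout are assumptions):

```python
# Keyed-MD5 message authentication in the SNMPv2 style: sender and
# receiver share a secret, and the digest over secret+message acts as
# a symmetric "signature" over a conference control request.  (A full
# design would also need replay protection; this is only a sketch of
# the kind of hook a conference control protocol might leave room for.)
import hashlib

def sign(secret, message):
    """Digest covering the shared secret and the message bytes."""
    return hashlib.md5(secret + message).hexdigest()

def verify(secret, message, digest):
    """Accept the message only if the keyed digest matches."""
    return sign(secret, message) == digest

if __name__ == "__main__":
    secret = b"shared-conference-key"            # assumed out-of-band key
    msg = b"DROP participant=B conference=42"    # hypothetical control message
    tag = sign(secret, msg)
    print(verify(secret, msg, tag))                                  # authentic
    print(verify(secret, b"DROP participant=A conference=42", tag))  # forged
```

A forged drop-request (concern 2 above) fails verification because the forger lacks the shared secret.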

Regards,

  Ran
  atkinson@itd.nrl.navy.mil

  employed by, but not speaking officially for the
    Naval Research Laboratory

From rem-conf-request@es.net Tue Mar 23 07:38:42 1993
To: confctrl@ISI.EDU, rem-conf@es.net
Subject: Re: Confctrl
Date: Tue, 23 Mar 93 15:19:14 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Content-Length: 3017
Status: RO
X-Lines: 63



 >Agenda for the Conference Control BOF (confctrl)

Such Distributed Systems will soon be torn apart by internal
contradictions if the revolution is not in time!

1. In the PREPARE Project, we are trying to build a management system
for a Virtual Private Network. This is a distributed system that
spans many management domains (called customer premises networks,
public networks and others), the VPN MIS. It then hides the internals of
the private and public networks, and provides a single service access
point for signalling, management etc. Our test application is
Multimedia Conferencing.

2. In MICE, we are trying to build distributed access to a centralised
Conference Management and Multiplexing Centre (CMMC), with a unified
interface to "segment managers" each of which need to be setup to
build the paths from source to destination sets of users.

However, according to 1, you don't get any paths at all until the management has
been asked to set them up, while 2 would view the control paths as present,
even if the data (e.g. video/audio) ones aren't yet, so the CMMC can
set them up....

i.e. we have a deadly embrace - the distributed net managers can only find 
out what paths to create if the distributed application tells them.
The distributed application can only build its paths if it can find
out the path attributes required by other members of the distributed 
conferences... but it can't see them to ask them!

One solution is to permit low bandwidth control traffic i.e. the
'signalling channel' MUST be open all the time, and must carry modest
amounts of distributed application information between peer entities
in the conferencing system AS WELL AS management information...

However, another solution, which we hope to follow in the later stages of PREPARE
but have already started on (re: COOPARE), is offloading some of the
conferencing system functionality onto the VPN, initially by incorporating
what is currently in the CAR directory service as VPN services,
i.e. registration of users and endpoints.
This then could be expanded to include information on the
capabilities of the multimedia equipment and applications at each workstation
(in some common representation) to ensure connections with suitable QoS 
are established (the VPN will already know about network resources). 

The plan is for the creating user to request the conference be established
with media quality specified in friendly terms, e.g. "full speed monochrome
video" or "CD quality audio", or more likely in categories that the user
quickly becomes used to, e.g. high, medium, low quality. This then sets bounds
within which the VPN tries to supply the best QoS, bearing in mind
what is available as both network resources and workstation capabilities.

Another view of this paradox is that the B-ISDN community is
telco/connection-obsessed while the Internet community isn't:-)

comments...

cheers
jon

coming soon - the public src release of the Car Meta-Conferencing System!
watch this space.

From rem-conf-request@es.net Tue Mar 23 09:18:28 1993
To: rem-conf@es.net
Subject: multicast and audio on solaris (i.e. vat...)
Date: Tue, 23 Mar 93 16:46:57 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Content-Length: 227
Status: RO
X-Lines: 10


if anyone cares, simple audio and multicast programs compile and run
under solaris

it is, however, a serious pain in the neck getting ANYTHING going on
this so called operating system...

so i wouldn't hold your breath:-)
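For reference, the kind of minimal multicast test program in question might look like this in modern Python (group, port and ttl here are arbitrary examples, not anything specific to Solaris or to any tool named above):

```python
# Minimal IP multicast sender: the sort of "simple multicast program"
# referred to above.  Group, port and ttl are arbitrary examples.
import socket
import struct

def make_multicast_sender(ttl):
    """UDP socket with the multicast scope (ttl) set, as sd/vat sessions do."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Limit how far the multicast datagrams propagate.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                 struct.pack("b", ttl))
    return s

if __name__ == "__main__":
    s = make_multicast_sender(16)
    try:
        s.sendto(b"hello mbone", ("224.2.0.1", 4000))
    except OSError:
        pass  # no multicast route available on this host
    s.close()
```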

jon

From rem-conf-request@es.net Tue Mar 23 12:35:46 1993
Date: Tue, 23 Mar 93 15:14:27 EST
From: Chip Elliott <celliott@BBN.COM>
To: rem-conf@es.net
Subject: Security & Conferences
Content-Length: 953
Status: RO
X-Lines: 27


I suggest that we NOT tackle security issues in this round
of Conference Control work.

Of course, Ran is right. Security can't be added after the
fact. An insecure conferencing system will not evolve into
a secure one.

But look. We've got enough issues on our plate as it is.
There are several major styles of conference control to choose
from, and I don't think anyone has a good handle on resource
negotiation, media synchronization issues, etc. These issues
are central to conference control and are more than enough
for us to wrestle with.

Let's try to sort out a "baby" scheme for conference control,
and try to make a framework in which we can experiment.
I think we'll find this to be an extremely hard problem,
and I'm not sure we'll even succeed at this limited task.

Trying to solve every problem in the first attempt is a sure
recipe for muddle, non-action, and "death by a thousand features".

Cheers,

	Chip Elliott		celliott@bbn.com


From rem-conf-request@es.net Tue Mar 23 13:34:49 1993
Date: Tue, 23 Mar 93 16:26:51 EST
From: atkinson@itd.nrl.navy.mil
To: rem-conf@es.net
Subject: Conference Control & Security
Content-Length: 653
Status: RO
X-Lines: 18


  Chip Elliott and I might be violently agreeing here, but I'm not
sure, so let me try to clarify my own beliefs on this:

  I think it would be a mistake to neglect security during the current
discussions and design phase.

  I think that the current focus should be on the more operational
issues, with appropriate attention paid to identifying areas that have
security implications (I've made a start on this) and appropriate
attention paid to providing some hooks for security.  

  Security design must be handled as an integral part of the general design,
though one shouldn't let it consume all of one's time or resources.

Ran
atkinson@itd.nrl.navy.mil


From rem-conf-request@es.net Tue Mar 23 15:33:18 1993
X-Charset: MACINTOSH
To: rem-conf@es.net (rem-conf)
From: Paul_Lambert@poncho.phx.sectel.mot.com (Paul Lambert)
Date: Tue, 23 Mar 1993 16:32:11 MST
Subject: Re: >conference control BOF
Content-Length: 2133
Status: RO
X-Lines: 47

        Reply to:   RE>>conference control BOF

The charter for the IPSEC WG is to provide security for client protocols of
IP.  This will include combinations of authentication, confidentiality,
integrity, and access control.  It looks as if the  IPSEC WG should
specifically consider the implications of conferencing.  I'll be at the
first BOF (Tuesday) and can carry any specific requirements to the IPSEC WG.


I've spent some time working on conference call setup for ISDN environments.
 Much of this work does not translate well to a packet environment. 
However, some of the basic security partitioning is applicable.  All the
basic cryptographic security services should be able to be provided by other
protocols (perhaps from the IPSEC WG).  

The conference control signaling needs to ensure that complete information
about the state of the conference is available to any interested
participant.  This is a security requirement in the sense that individual
systems must be able to make access control decisions based on the complete
conference configuration.  The authenticity and integrity of this
information must be guaranteed, but these assurances do not have to be
provided by the conference signaling.

The implementation of these mechanisms will depend on the topology of the
conference control.  In a distributed scenario the interactions for adding a
participant may require a consensus mechanism.  The use of a centralized
conference controller provides a simpler model since all participants can be
considered subservient to the master.
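A toy sketch of that local access-control decision, driven by the complete conference state (the policy shape and names are invented for illustration, not part of any proposal here):

```python
# Each system decides for itself whether to accept a conference
# configuration, given the full (separately authenticated) membership
# list distributed by the conference control signaling.  Policy fields
# are invented for illustration.

def admit(conference_state, local_policy):
    """Return True if the local system accepts this conference configuration."""
    members = set(conference_state["members"])
    if members & local_policy["banned"]:
        return False                       # refuse any conference with a banned party
    if local_policy["closed"]:
        return members <= local_policy["allowed"]
    return True                            # open policy: anyone may participate
```

The point is that the decision needs the *complete* configuration: a single unexpected participant is enough to refuse.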

IMHO the conference control work should not pursue any cryptographic
security mechanisms.  Where cryptographic services are needed other
protocols should be identified or developed separate from the conference
signaling.  The conference control work should include signaling to
distribute the state of the conference to facilitate access control
decisions.  


Paul A. Lambert

Motorola, Inc. 
M/S AZ49-R1209          Internet: Paul_Lambert@email.mot.com
8220 E. Roosevelt Rd.   Phone:    +1-602-441-3646
Scottsdale, Az.         Fax:      +1-602-441-8377
     85257  USA




From dvtf-request@es.net Tue Mar 23 19:21:26 1993
Date: Tue, 23 Mar 93 19:19:20 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: AT&T video-conference tool for desktop computers
Cc: dvtf@es.net, rcwg@nic.hep.net
Content-Length: 5330
Status: RO
X-Lines: 91

Although the following seems to describe a circuit-based PC add-on, I wonder
if it's REALLY CCITT H.320 standards family compliant... Interesting to 
see AT&T pick such a high price point...

--Ari

----------------------------------------------------------------------------

        NEW YORK (UPI) -- American Telephone and Telegraph Co. took a major
step toward integrating communications and computing Monday with the
announcement of new products that will enable computer-to-computer video
communications hookups.
        AT&T's new products include a desktop personal-computer-based video
system that lets users see each other and collaborate on computer files
while they talk.
        People at several locations can join in on a single video call,
making in-house video-conferencing simpler and less expensive.
        The new AT&T Visual Solutions products and services not only let all
parties to the conversation see each other; they also allow participants
in the computer-to-computer video calls to edit files -- including
documents, spreadsheets and presentations -- on-line.
        ``This system will allow people to work together like they were in
the same room,'' an AT&T spokeswoman said.
        The company pledged that its Visual Solutions products and services
for business will be based on globally accepted standards for video
calls over digital phone lines.
        By pegging the new products and services to existing industry
standards, the new AT&T Visual Solutions will be able to work with other
products designed to meet the same standard.
        ``Businesses that might have been uncertain about investing in visual
communications equipment can rest assured now, knowing these products
will work with future products based on the same standards,'' said
Jerrod Stead, president of AT&T's Global Business Communications Systems
unit. ``In other words -- 'come on in, the water's fine.'''
        Desktop-computer users, working on systems running the Microsoft
Windows graphical environment, can use the new AT&T Personal Video
System Model 70 to see each other in a scalable video window on their
personal computers as they talk. The system also allows them to
simultaneously collaborate on computer documents.
        They can compare files and collaborate on changes in files and
documents by making notations or modifying files on the screen.
        A process that once would have taken days, with documents being sent
back and forth between parties, now can be completed in a few minutes.
        Users can share Windows applications, requiring that only one
personal computer hooked into the video call have the necessary
application software and files.
        And call participants can make changes and annotate files with
handwritten notes, all while seeing each other in the video window while
they talk and work.
        ``People are telling us that the ability to meet with others
spontaneously and work together, despite being in different places, will
change the way they do business,'' Stead said. ``It will make them more
productive, giving them the tools to make decisions quicker and better
serve customers.''
        AT&T said its Personal Video System allows customers the benefit of
quality motion video, at 10 to 15 frames per second, at economical
transmission rates of 112 or 128 kilobits per second.
        Stead said video quality is equal to that usually associated only
with conference-room systems.
        Customers can purchase Personal Video Systems in quantities of four,
starting at $6,995 per unit, with delivery scheduled for the fall of
1993.
        The Personal Video System Model 70 incorporates a board in AT&T's
8510T ISDN Voice Terminal that makes the unit ``video-ready''. The
system will work on most computers based on 386 or 486 processors, and
includes all necessary hardware and software. Hardware includes a video
camera unit, which is mounted on top of the personal computer monitor
and provides the pictures that are transmitted to other video-conference
participants. Also included are all connecting cables to hook up the
hardware.
        The Personal Video System 70 operates through AT&T's market-leading
Definity Communications System business telephone switches or through a
digital telephone line connection to an AT&T 5ESS(R) Switch.
        AT&T also will offer promotional packages for businesses that want to
install new or upgrade existing Definity systems to add video
capabilities.
        Included in the new line of video products is a MultiPoint Control
Unit that allows businesses to manage their own video conferences
between several locations.
        Customers can link up to 24 different conference sites for every
MultiPoint Control unit.
        The MultiPoint Control Unit will be available in the fourth quarter
of this year, with prices ranging from $60,000 to $200,000, depending on
features and the size of the unit.
        That may seem pricey, but conference-room type video-conferencing
systems can run into the millions of dollars. An AT&T spokesman said the
new system will make video conferencing more affordable -- and attractive
-- to small and medium sized businesses, and for smaller working groups
within large corporations.

----------------------------------------------------------------------------

From rem-conf-request@es.net Wed Mar 24 14:53:30 1993
Date: Wed, 24 Mar 1993 14:12:54 -0800
From: schooler@ISI.EDU
Posted-Date: Wed, 24 Mar 1993 14:12:54 -0800
To: rem-conf@es.net
Subject: Confctrl template for Touring Machine project
Cc: schooler@ISI.EDU
Content-Length: 10886
Status: RO
X-Lines: 222


----- Begin Included Message -----

>From abel@thumper.bellcore.com Wed Mar 24 12:24:46 1993
Date: Wed, 24 Mar 1993 15:22:14 -0500 (EST)
From: Abel Weinrib <abel@thumper.bellcore.com>
Content-Type: text/plain; charset=US-ASCII
To: confctrl@ISI.EDU
Subject:  Confctrl template for Touring Machine project


		    Conference Control BOF Template
		    -------------------------------

1. Name of project, program and/or protocol.

	Touring Machine Project

2. Contact person, affiliation and e-mail address.

	Abel Weinrib
	Bellcore (Network Systems Research Dept.)
	abel@bellcore.com

3. Target operating environment and key design considerations:

The Touring Machine project is studying the design and realization of a
robust software control infrastructure that enables a broad class of
multimedia communications applications.  Through its applications
programming interface, the Touring Machine platform supports
abstractions that shield an application designer from the details of
routing, resource allocation, presentation control, session control and
network and system management, and provides useful services such as
access to directories containing both static  and dynamic system
information, authentication, security, and session negotiation.

Currently, the Touring Machine platform is the basis for several
communications tools, including the CRUISER (TM) service and shared data
applications based on the RENDEZVOUS (TM) system.  The CRUISER service
is a multimedia communications application designed to support informal
communications among  remotely located co-workers, as well as
participation in seminars through a "virtual auditorium" service.  The
Touring Machine platform also supports mobile users, using active badges
from Olivetti Research Labs, Cambridge. 

The current realization of the system controls analog audio and video
switches within two Bellcore locations, with analog audio and video
hardware on about 150 users' desktops.  While unsophisticated in terms
of transport technology, this choice allows the support of a large and
active user population, and has allowed our effort to be concentrated on
developing the software infrastructure and multimedia applications.  The
two locations are connected by H.261 codecs operating on a T1 circuit. 
Control messages and "data" media streams are carried over Bellcore's
Internet.  Ongoing enhancements include the addition of integrated
packet transport of all media types.  There are also plans to create a
version of the current system accessible via ISDN.

The next iteration of system design is currently underway.  It addresses
system structuring principles for extensible, open, managed systems, as
well as expanding the functionality of the API. The design borrows
heavily from the architectural principles of Bellcore's INA project; the
system is designed as a distributed object-oriented system based on
trading.  Research continues in the areas of reliability, extensibility,
support for hybrid analog/digital fabrics,  interworking across multiple
administrative domains, safeguards for an "enterprise model" allowing
third-party service providers, and integrated systems management,
including fault management and accounting. 


4.a Type of conference styles supported by your system/protocol.

The Touring Machine session protocol supports controlled multimedia
conferences, with separate control over the media used.  One user
initiates the session, inviting others to join.  Once established,
additional users may be invited to join, and current members may leave. 
The session allows the setting of various policies regarding addition of
new members to the conference and changing of other session state such
as the transport topology.  For instance, one session policy allows any
user to join a session without action required on the part of the
current members; another specifies that only the initiator may initiate
changes.  Multicasting of talks from conference rooms, with browsing of
a list of the available conferences by a user, is also supported.
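The join policies described above might be sketched as follows (the policy names and the shape of the session object are invented for illustration; they are not the Touring Machine API):

```python
# Sketch of per-session join policies: "open" lets any user join
# without action by the members, while "initiator-only" requires the
# initiator to approve each addition.  Names are illustrative only.

class Session:
    def __init__(self, initiator, policy="initiator-only"):
        self.initiator = initiator
        self.policy = policy
        self.members = {initiator}

    def join(self, user, approved_by=None):
        if self.policy == "open":
            self.members.add(user)          # no member action required
        elif self.policy == "initiator-only" and approved_by == self.initiator:
            self.members.add(user)          # only the initiator may admit
        else:
            raise PermissionError("join refused by session policy")
```

Other session state, such as transport topology changes, could be gated by the same policy check.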

4.b Profile of user community: 

 Various applications have been built on the Touring Machine platform. 
The CRUISER(TM) application is currently being used on a day-to-day
basis by about 150 people within Bellcore spread over two locations. 
The application supports multiparty, multimedia conferencing, and
connection to seminars and other broadcast sources.  The application was
developed with significant thought being given to usability.  The users
are computer-literate, but expect the system to work without thought on
their part.  Within a location the media quality is excellent (an
advantage of using analog transport); between locations the quality of
the H.261 codecs, especially regarding roundtrip delays, is not so good.

The current system supports impulse calling from a user's desktop; no
advanced reservation is required (or allowed).  Thus, it is used most
often for spontaneous conversations rather than for formal meetings;
however, these conversations are often lengthy.  The lack of use for
formal meetings may also have something to do with a majority of the users
being in one location and the relatively poor quality of the audio and
video connecting the two locations.  (Bellcore's video window technology
also connects the two locations, and appears preferable for formal
meetings with multiple participants.)

5. Architecture assumptions: 

The Touring Machine platform sits above network transport, providing a
suite of useful services to the creators of multimedia communications
applications.  These services include authentication of users, a
directory service containing system information (such as the users of
the system and the current sessions in the system), a session service,
and others.  As such, the system model is quite network-centric, with a
significant role to be played by the Touring Machine infrastructure. 
The implementation of the system is distributed, in that the Touring
Machine software is structured as a set of distributed objects working
co-operatively to provide the services supported by the API.  The
current implementation allows for a single administrative domain;
extensions in progress aim to support multiple administrative domains,
with protocols between the domains to realize resource allocations and
directory services across the multiple domains.

The session service supports an abstract session object, which
encapsulates a control relationship among its members that is separate
from the topology of the actual media streams that make up the
communications.  Thus, a user may be a member of a session without being
involved in the media streams--for instance to control the conference. 
The session is the site of negotiation about the transport topology for
the communications, specified in an abstract manner.  In addition, the
session provides support for negotiation about session membership and
session policy such as who may find out about the session and who may
change the session's state.  
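
As a rough illustration only (this is not the Touring Machine API; every
name here is hypothetical), the separation of session membership from the
media transport topology might be sketched as:

```python
# Hypothetical sketch of a session object that keeps the control
# relationship (membership) separate from the abstract media topology,
# in the spirit of the session abstraction described above.

class Session:
    def __init__(self, initiator, open_membership=False):
        self.members = {initiator}     # control relationship among members
        self.streams = []              # abstract media transport topology
        self.open_membership = open_membership

    def invite(self, inviter, user):
        # only current members may invite others
        if inviter not in self.members:
            raise PermissionError("inviter is not a session member")
        self.members.add(user)

    def join(self, user):
        # one policy allows anyone to join without member action
        if not self.open_membership:
            raise PermissionError("session requires an invitation")
        self.members.add(user)

    def add_stream(self, requester, medium, sources, sinks):
        # only members may change the topology; a member need not
        # appear in any stream (e.g., a pure controller)
        if requester not in self.members:
            raise PermissionError("only members may change the topology")
        self.streams.append({"medium": medium,
                             "sources": sources, "sinks": sinks})

s = Session("alice")
s.invite("alice", "bob")
s.add_stream("alice", "audio", sources=["alice"], sinks=["bob"])
# "bob" is a member even though he sources no media stream
```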

6.a How do you define conference control?  

I like Eve's definition:
	The management and coordination of multiple sessions, and
	their multiple users, in multiple media.  

6.b Conference control functionality supported.

A suite of services made available to the multimedia communications
application programmer by the Touring Machine platform.  These include:
- User authentication.
- (Simple) negotiation for initiating and changing session membership,
policies, and topologies for transport in multiple media.  
- The transport topology is specified using high level abstractions,
thus hiding the transport details from the applications programmer.
- A user may participate in multiple simultaneous sessions, limited only
by the resources they have available to display the media streams. 
Abstractions are provided to manage a user's network access resources to
allow independent suspending and resuming of multiple sessions.
- Access to directory service containing dynamic and static system state
information.
- Support for user mobility using active badge technology.

6.c Other control functions you would like to support.

More powerful control over the presentation of the media streams to the
user (i.e., the particular type of video bridging function).

7. Conference control protocol details: 

Set up is explicit through the session protocol between applications and
the Touring Machine system.  There is separation of membership in a
session from the definition of the actual transport topology for the
various media streams.  The transport topology is defined abstractly,
and can be quite general.  The state of a session is maintained by the
Touring Machine system; the session protocol allows applications to
maintain a view of the session state.

8. Hardware/software platforms.  

Various analog computer-controlled audio-video switches, bridges and mixers.
The Touring Machine platform runs over UNIX(R) on Sun, DEC, and NeXT
workstations.  
The applications that have been written run on UNIX with X-windows(TM)
based GUIs.

9.a What specifically has or has not been implemented?

The current version of the system is fully operational and supports many
users on a day to day basis.  

9.b What was unexpectedly easy or difficult to implement?

The system is inherently asynchronous, with multiple users able to
initiate changes at any time to, e.g., the session state.  Thus, it is
impossible for application programs to maintain an exact view of the
state, and it is quite difficult for them to act appropriately in all
cases when their view is incorrect.

The Touring Machine platform controls a significant number of network
resources such as bridges and trunks.  Resource loss is a serious
concern, arising from hardware and software faults and user actions
(such as killing off an application). Because the system is implemented
in a distributed fashion, recovery of lost resources is cumbersome.  The
next version of the system will include more fault tolerance, with an
emphasis on protocols that recover lost resources.

We have found supporting good quality multiparty audio, even in the
analog domain, to be troublesome.  While usable, it is not like being in
the same room--a relaxed multiparty conversation is not really possible.
Also, between locations the delays introduced by the H.261 codecs lead
to noticeable and irritating echoes.

9.c What might you change as a result?

The effort currently underway on designing the next version of the
system addresses some of the issues raised in 9b, as well as many others
associated with building extensible, open, managed systems and with
expanding the flexibility and functionality of the API.

10. If available, suggested readings about your work on confctrl.

 "The Touring Machine System", Arango et. al., CACM January 1993.



----- End Included Message -----


From rem-conf-request@es.net Wed Mar 24 19:32:07 1993
Posted-Date: Wed 24 Mar 93 19:18:40-PST
Date: Wed 24 Mar 93 19:18:40-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: IETF meeting audio/video multicast
To: rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Status: RO
Content-Length: 7935
X-Lines: 165

The Columbus IETF meeting next week marks the fourth time we plan to
multicast live audio and video from the plenary and some working group
sessions out across the Internet.  This time the OARnet folks and IETF
Secretariat folks are handling most of the work, which is much appreciated!
We will attempt to transmit two simultaneous working group sessions again
this time, since some network problems have been resolved since last time.
Included below is the "IETF TV Guide".

To receive this multicast, you must have an audio-capable workstation
(SPARC, SGI, DEC 5000) with IP multicast software added to the operating
system.  You must also be connected to the semi-permanent virtual IP
multicast network we've dubbed the MBONE.  Since this effort has been going
on for some time now, we hope that all those who are interested have
already arranged to be connected.  If not, contact your network provider to
see if they are providing MBONE connections, but please don't be
disappointed if they can't respond by next week.

More information about the MBONE, including what hardware and software is
required to receive the multicast, is available by anonymous FTP from
venera.isi.edu in the file mbone/faq.txt.  There is also an article about
the first audiocast in pub/ietf-audiocast-article.ps.

For each of the two channels, "IETF Channel 1" and "IETF Channel 2", there
may be three concurrent multicast streams:

    - low-rate audio for those with slow links (GSM encoding; approx 16 kbps)
    - "normal"-rate audio (PCM encoding; approx. 70 kbps)
    - video (one of two encodings; approx. 25 to 128 kbps)

Each of these streams will be advertised as a session in the LBL Session
Directory tool, sd, which can be used to automatically invoke the audio and
video programs with the appropriate address and TTL parameters.  For those
who want or need to invoke the programs manually, these parameters are
listed as an appendix to this message.  The audio will be originated by the
vat program from LBL; you may use vat or any other vat-compatible
application to listen in and talk back.  For video, we will use the new
version 3.0 of the nv program from Xerox PARC (about to be or just released).

Following is the tentative schedule of plenary meetings and working group
sessions to be transmitted; to interpret the acronyms, see the IETF Agenda.
We haven't yet checked with all of the listed working group chairs to see
if they agree to having their meetings multicast; if any named group does
NOT want to have their meeting transmitted, it will be omitted.  This
schedule is also subject to Murphy's Law.


------------------------------------------------------------------------------
IETF TV Guide
(Times are Eastern Standard Time, GMT - 0500)


 MONDAY    0900-0930     0930-1200     1330-1530     1600-1800    1930-2200
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 1 |intro plenary|tech. plenary|   osids     |    ipae     |  bigaddr   |
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 2 |      "      |      "      |             |    tuba     |  vcrout    |
--------+-------------+-------------+-------------+-------------+------------+


 TUESDAY   0900-0930     0930-1200     1330-1530     1600-1800     1930-2200
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 1 |tech. plenary|   confctrl  |     avt     |     avt     |   rtqos    |
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 2 |      "      |     sip     |     pip     |     pip     |            |
--------+-------------+-------------+-------------+-------------+------------+


WEDNESDAY  0900-0930     0930-1200     1330-1530     1600-1800     
--------+-------------+-------------+-------------+-------------+
 CHAN 1 |tech. plenary|     avt     |   confctrl  |   mobileip  | 
--------+-------------+-------------+-------------+-------------+
 CHAN 2 |      "      |     sdr     |     sip     |   vcrout    |
--------+-------------+-------------+-------------+-------------+


THURSDAY   0900-0930     0930-1200     1330-1530     1600-1800     1930-2200
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 1 |tech. plenary|   mobileip  |    mospf    |tech. plenary|open plenary|
--------+-------------+-------------+-------------+-------------+------------+
 CHAN 2 |      "      |             |             |      "      |     "      |
--------+-------------+-------------+-------------+-------------+------------+


 FRIDAY    0900-1200
--------+-------------+
 CHAN 1 |    idmr     |
--------+-------------+
 CHAN 2 |             |
--------+-------------+

------------------------------------------------------------------------------

Each day's program may also be replayed by tape delay from 2300 to 0800.

------------------------------------------------------------------------------

The criteria used to select the sessions to be transmitted were the following:

	- all plenary sessions
	- sessions related to A/V teleconferencing or multicast
	  (avt, confctrl, mospf, idmr)
	- sessions related to IPv7 (pip, tuba, ipae, sip)
	- miscellaneous sessions, by request


Advice for participants:

Please keep your microphones muted and your video transmissions disabled
during the plenaries and working group sessions, unless or until invited to
respond by the chair of the session.

Vat users can disable reception of accidental sources of audio multicasts
(such as people who forget to mute their mics) by clicking in the box next
to that source's name.

During plenary sessions, when there is only one audio stream, we may run
two video streams on the same video multicast channel, one with a higher
than normal TTL, to allow people who would normally receive only two audio
streams (i.e., threshold 128) to receive one audio plus one video instead.

Any comments or reports of problems should be emailed to rem-conf@es.net.
However, we cannot promise to respond immediately, or at all, to any
comments or problem reports; it will be a very busy week for all of us.


Steves Casner and Deering, and the MBONE/IETF-TV crew

Appendix:

Following is a table of the multicast addresses, ports, and TTLS to be used
with those programs, for each type of multicast stream.  (The "thresh"
column is meaningful only to those people configuring multicast tunnels,
who should already know what it means.)  If you intend to talk back or to
transmit your own video to the channel (with permission of the session
chair), PLEASE be sure to use the right TTL for your channel!
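
The interaction between a packet's TTL and a tunnel's threshold can be
sketched roughly as follows. This is a simplification of what mrouted
actually does; the rule assumed here is just that a packet crosses a
tunnel only if its remaining TTL meets the tunnel's configured threshold:

```python
def crosses_tunnel(ttl, threshold):
    """A multicast packet is forwarded over a tunnel only if its
    remaining TTL is at least the tunnel's configured threshold
    (simplified model of mrouted's scoping behavior)."""
    return ttl >= threshold

# Per the table below: a channel-1 PCM audio packet sent with ttl 191
# crosses a threshold-160 tunnel, but a channel-2 PCM packet sent with
# ttl 159 does not -- which is how one audio stream is kept "closer in".
assert crosses_tunnel(191, 160)
assert not crosses_tunnel(159, 160)
```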

  Table of Multicast Addresses, Ports, TTLs and Thresholds for Mar. '93 IETF
  --------------------------------------------------------------------------

                        peak rate   address      port(s)        ttl     thresh
                        ---------  ---------    ----------      ---     ------
IETF chan 1 audio, GSM   16 kbps   224.0.1.10   4100, 4101      255       224

IETF chan 2 audio, GSM   16 kbps   224.0.1.13   4130, 4131      223       192

IETF chan 1 audio, PCM   70 kbps   224.0.1.11   4110, 4111      191       160

IETF chan 2 audio, PCM   70 kbps   224.0.1.14   4140, 4141      159       128

IETF chan 1 video, nv   128 kbps   224.0.1.12   4444            127        96

IETF chan 2 video, nv,  128 kbps   224.0.1.15   4444             95        64

------------------------------------------------------------------------------
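
For those wiring things up by hand, the per-stream parameters above map
onto the standard IP multicast socket options. A minimal sketch (modern
Python rather than anything from 1993) of subscribing to the channel-1
PCM audio group and setting the required send TTL:

```python
import socket
import struct

ADDR, PORT, TTL = "224.0.1.11", 4110, 191   # channel 1 PCM audio, per table

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# membership request: multicast group address + local interface (any)
mreq = struct.pack("4s4s", socket.inet_aton(ADDR),
                   socket.inet_aton("0.0.0.0"))
# on a multicast-capable host, joining the group would then be:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
#   sock.bind(("", PORT))

# anything you send back must carry the right TTL for the channel
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)
```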

The IETF multicast addresses, above, are also registered in the Domain Name
System, with these names:

                224.0.1.10   IETF-1-LOW-AUDIO.MCAST.NET
                224.0.1.11   IETF-1-AUDIO.MCAST.NET
                224.0.1.12   IETF-1-VIDEO.MCAST.NET
                224.0.1.13   IETF-2-LOW-AUDIO.MCAST.NET
                224.0.1.14   IETF-2-AUDIO.MCAST.NET
                224.0.1.15   IETF-2-VIDEO.MCAST.NET
-------

From rem-conf-request@es.net Thu Mar 25 09:19:13 1993
To: rem-conf@es.net
Cc: rlang@NISC.SRI.COM
Subject: Re: multicast and audio on solaris (i.e. vat...)
Date: Thu, 25 Mar 93 09:02:45 PST
From: Ruth Lang <rlang@NISC.SRI.COM>
Content-Length: 622
Status: RO
X-Lines: 29


Does anyone have any additional information or war stories to share
about actual use of multicasting/audio on Solaris 2.x?  If so, please
do.

Thanks,

Ruth Lang

------- Forwarded Message

Date:    Tue, 23 Mar 93 16:46:57 +0000
From:    Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
To:      rem-conf@es.net
Subject: multicast and audio on solaris (i.e. vat...)


if anyone cares, simple audio and multicast programs compile and run
under solaris

it is, however, a serious pain in the neck getting ANYTHING going on
this so called operating system...

so i wouldnt hold your breath:-)

jon

------- End of Forwarded Message


From rem-conf-request@es.net Thu Mar 25 11:39:00 1993
Date: Thu, 25 Mar 1993 14:07:14 -0500
From: mandrews@alias.com (Mark Andrews)
To: rem-conf@es.net
Subject: Description of sd, vat and ivs
Content-Length: 309
Status: RO
X-Lines: 9


I just grabbed the SGI version of sd, but am not quite sure what to do with it.
From the discussions on this list, there are these audio-video tools which
include sd, vat and ivs. Is there a paper or any other description of the
usage and aims of these tools? Perhaps in the archive of this list?

Thanks,

Mark

From rem-conf-request@es.net Thu Mar 25 13:01:12 1993
Date: Thu, 25 Mar 1993 12:52:02 -0800
From: schooler@ISI.EDU
Posted-Date: Thu, 25 Mar 1993 12:52:02 -0800
To: rem-conf@es.net
Subject: Confctrl Templates
Cc: schooler@ISI.EDU
Content-Length: 331
Status: RO
X-Lines: 8


Is there anyone out there who would care to comment on their experience
specifically with a loose-control session management scheme?  So far, 
responses have come in from individuals who mostly have designed tight-control 
solutions.  Ironically, most of the tools regularly in use over the 
MBONE have the former structure. 

E.

From rem-conf-request@es.net Thu Mar 25 17:20:09 1993
Date: Thu, 25 Mar 1993 16:57:15 PST
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: nv version 3.0
Content-Length: 883
Status: RO
X-Lines: 22

Hello everyone...

You can now find nv version 3.0 available for ftp. In addition to sources,
binaries are available for the Sun 4, SGI, and DECstation platforms. All
of these files are on parcftp.xerox.com, in /pub/net-research:

-rw-r--r--  1 frederic   730691 Mar 25 16:44 nvbin-3.0-dec5k.tar.Z
-rw-r--r--  1 frederic   608263 Mar 25 16:49 nvbin-3.0-sgi.tar.Z
-rw-r--r--  1 frederic  1541287 Mar 25 16:16 nvbin-3.0-sun4.tar.Z

-rw-r--r--  1 frederic    99059 Mar 25 16:49 nvsrc-3.0.tar.Z

This is the version of nv which will be used for the IETF videocast. Note
that it is not backwards compatible with nv 2.x. Please try and get rid of
the old versions you have lying around...

If you have any problems with the new version, let me know. I'll be
flying to Columbus on Saturday, but I'll do my best to stay in email
contact once there.
--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Thu Mar 25 19:44:41 1993
Date: Thu, 25 Mar 1993 19:32:36 PST
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: SCSI frame grabbers for UNIX machines
Content-Length: 1776
Status: RO
X-Lines: 38

Hi everyone...

When I was checking on that Mac frame grabber that was mentioned
here on the list, I managed to pick up some info about a higher-end
version of the same product which may be better suited to talking to
UNIX boxes. It's a bit more expensive ($1500-3000 list, depending on
configuration options), but also performs better...

The company is:

	IVA Corporation (Intelligent Video Applications)
	P. O. Box 95
	Wayland, MA 01778
	(508) 358-4782

The product is "PixLink", and comes in three flavors -- SCSI, Ethernet
talking Netware, and Ethernet talking NFS. The basic model in all cases
is that the box pretends to be a disk drive, with special places you can
write for control and read for status & data. The SCSI version can either
be accessed as a raw block device, or as a device with a real
filesystem on it. It has a real hard disk in it in the latter configuration,
and you can do things like continuous capture of short sequences to that
disk even if you can't read all the bits over the SCSI that fast.

Normally, the box sends either 8 bit greyscale, 24 bit color, or 8 bit color
(with a LUT you can download into it, to match whatever your real
display hardware is using). It can also be configured to send JPEG data,
which greatly reduces the data rate required for full motion video.

I haven't actually seen one of these boxes yet, but they seem like they
could be an excellent choice for portability across platforms. It sounds
like the software required to actually set them up & capture frames
would be incredibly easy to write. While the SCSI & Ethernet transfer
rates aren't quite good enough to do 30fps full size, it should let you do
around 10-15fps in greyscale or 8 bit color at 256x240.
--
Ron Frederick
frederick@parc.xerox.com

From rem-conf-request@es.net Fri Mar 26 03:40:09 1993
To: rem-conf@es.net
Cc: ivs-users@jerry.inria.fr
Subject: new IVS release 3.0
Date: Fri, 26 Mar 93 12:33:24 +0100
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Content-Length: 3426
Status: RO
X-Lines: 95



The ivs version 3.0 is now available by anonymous ftp from 
avahi.inria.fr (138.96.24.30) in /pub/videoconference:

sources:
-rw-rw-r--  1 turletti   322503 Mar 26 11:46 ivs-src.tar.Z

binaries:
-rw-rw-r--  1 turletti  3955929 Mar 26 11:56 ivs-dec5000.tar.Z
-rw-rw-r--  1 turletti  2709917 Mar 26 12:08 ivs-sun4-vfc.tar.Z

I'll add the following binaries as soon as I receive them from other sites.
We only have Sparcstations + VideoPix boards here.  Thanks in advance.

- sparc and Parallax board
- SGI + Indigo
- decstation + VIDEOTX
- HP + RasterOps

I'd like to thank Tom Sandoski, Joe Ragland, Jian Zhang and Pierre Delamotte 
who helped me to set up this new version.

Enclosed is a brief description of changes:

**************************************************************************
                                 WARNING 

        THIS VERSION IS NOT COMPATIBLE WITH PREVIOUS IVS VERSIONS
**************************************************************************

* The packet formats have been changed according to the two
  following Internet Drafts:

    - "A Transport Protocol for Real-Time Applications", H. Schulzrinne,
    AVT working group, 12/15/92.

  and

    - "Packetization of H.261 video streams", C. Huitema & T. Turletti,
    AVT working group, 3/8/93.


* For VideoPix only, square pixels are now correctly processed in
  B&W mode, and real CIF (352x288 pels) is obtained for both PAL and
  NTSC video streams.

* For VideoPix only, brightness and contrast tuning has been added
  at the coder side. The last IETF retransmission showed that such
  tuning is useful when the scene being displayed is dark.


* A new optional mode, "Avoid Feedback" is added. This mode must
  be chosen when there are a lot of decoding stations (more than
  10). Video decoders won't send to the video coder Negative
  Acknowledges neither Full Intra Requests. This option limits the
  number of successives INTER encodings blocks and forces the INTRA
  encoding mode more frequently. Selection is done in the "Rate
  Control" menu, video coder side.


* Brightness and contrast tuning are now managed by the video
  decoder processes. A tuning popup appears when you click on a
  decoding window.


* Packet length is now limited to 1000 bytes. If a GOB is larger
  than this limit, it will be sent in several packets. If part of
  a GOB is lost, the video decoder process can resynchronize
  itself with the next GOB received. A resynchronization facility
  has been added to the video decoder.
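
The 1000-byte limit amounts to a simple fragmentation rule; a
hypothetical sketch (not ivs source code) of splitting one encoded GOB
into packets:

```python
MAX_PACKET = 1000  # bytes, per the ivs 3.0 limit described above

def packetize_gob(gob_bytes, max_packet=MAX_PACKET):
    """Split one encoded GOB into packets of at most max_packet bytes.
    If a fragment is lost, a decoder can discard the rest of this GOB
    and resynchronize at the start of the next GOB it receives."""
    return [gob_bytes[i:i + max_packet]
            for i in range(0, len(gob_bytes), max_packet)]

# a 2500-byte GOB becomes three packets: 1000 + 1000 + 500 bytes
packets = packetize_gob(bytes(2500))
```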

* A decode_h261 program has been added to decode an H.261-encoded
  file. It makes it possible to run the h261_decode routine without
  the session manager, which should be useful when testing
  interoperability between ivs and hardware H.261 codecs. The
  H.261-encoded sequence "Miss America" is included in the
  examples directory.

* For VideoPix with cameras only, a color encoding/decoding mode is now
  available thanks to a collaboration with Pierre Delamotte from
  INRIA Rocquencourt (delamot@wagner.inria.fr). The grabbing
  procedures for other platforms are not available yet.


* IVS now supports the VIDEOTX frame grabber for DECstations,
  thanks to Jian Zhang from the CSIRO/Joint Research Centre in
  Information Technology, Flinders University of South Australia
  (jian@jrc.flinders.edu.au).


If you have any problems with this new version, please let me know.


Thierry Turletti
turletti@sophia.inria.fr

From rem-conf-request@es.net Fri Mar 26 10:08:38 1993
Date: Fri, 26 Mar 93 12:55:34 EST
From: hgs@research.att.com (Henning G. Schulzrinne)
To: rem-conf@es.net
Subject: Nevot 1.3
Cc: arc@sgi.com
Content-Length: 981
Status: RO
X-Lines: 28

Nevot version 1.3 has been made available for anonymous ftp from
gaia.cs.umass.edu:~ftp/pub/nevot:
  nevot1.3.tar.Z     sources
  nevot1.3.sgi.tar.Z    binaries for Silicon Graphics
  nevot1.3.sun4.tar.Z   binaries for Sun4c/m (SunOS 4.1.x)

Libraries for SGI and Sun are in the nevot/lib.sgi and nevot/lib.sun
directories.

The release is barely beta, so you may want to keep an earlier version
around. The CHANGES file summarizes some of the changes that were made. 
See the README file for installation instructions.

*** NOTE: Since the command interface has changed, delete all current
.nevotinit files. ***

The source distribution has been compiled on Solaris 2.1, but has
received minimal testing.

Thanks to Andrew Cherenson for extensive help in testing alpha
versions with amazing turn-around time.


---
Henning Schulzrinne (hgs@research.att.com)
AT&T Bell Laboratories  (MH 2A-244)
600 Mountain Ave; Murray Hill, NJ 07974
phone: +1 908 582-2262; fax: +1 908 582-5809

From rem-conf-request@es.net Fri Mar 26 10:18:52 1993
Date: Fri, 26 Mar 1993 09:46:56 -0800
From: schooler@ISI.EDU
Posted-Date: Fri, 26 Mar 1993 09:46:56 -0800
To: rem-conf@es.net
Subject: Re: Template for confctrl BOF
Cc: schooler@ISI.EDU
Content-Length: 4524
Status: RO
X-Lines: 187


----- Begin Included Message -----

>From Don.Hoffman@eng.sun.com Fri Mar 26 08:34:21 1993
Date: Fri, 26 Mar 93 08:34:07 PST
From: Don.Hoffman@eng.sun.com (Don Hoffman)
To: schooler@ISI.EDU
Subject: Re: Template for confctrl BOF
X-Sun-Charset: US-ASCII
Content-Length: 4199
X-Lines: 171


Eve,

Here is a brief description of the Sun COCO project.

Don

----------------------------------------------------------------------

		    Conference Control BOF Template
		    -------------------------------

1. Name of project, program and/or protocol.

	Sun Microsystems' COCO project

2. Contact person, affiliation and e-mail address.

	David Gedye, Principal Investigator, gedye@sun.com

3. Target operating environment and key design considerations:

   - WAN vs LAN 

	Both -- 0.5 T1 --> LAN speeds.

   - Digital vs analog

	Digital

   - The kinds of collaborative media used in your system 
     (e.g., real-time audio, video, animations, landsat images) 

	Audio, video, shared screen images, shared drawing area.

   - Packet technology vs ISDN

	Packet.

   - Room-to-room vs desktop conferencing

	Desktop.


4.a Type of conference styles supported by your system/protocol.

	Small group (<5) informal collaboration. Phone call model
	of conference initiation.

4.b Profile of user community: 

   - Expertise level 

	Only field-tested on software engineers (see publications).

   - Formality of meetings

	Informal.

   - Demand for quality of service

	Low, but could use better.

   - Mechanisms for scheduling/reservation of system

	None -- so few participants that load limiting was unnecessary.
	Scheduling done through other mechanisms

5. Architecture assumptions: 

   - Distributed vs centralized model vs hierarchical

	One centralized application talking to proxies (called
	"appliances") on all desktop machines involved in the conference.

   - System component(s) responsible for conference control

	A dedicated application, run by caller, sets up conference parameters
	(who is in it, media, bandwidth...)

   - Degree of homogeneity in end-system capabilities 

	High (All Sparcstations with "DIME" boards)

   - Multicast integration

	Yes, but only used to optimize point/point connectivity.  Not
	required.

   - Directory services

	Rendezvous points are machine names.  Conference Manager maps
	user names to machine and resource names.

   - Support for quality of service

	Some (settable by user throughout conference -- frame rate, audio
	encoding)

   - Open vs closed membership (e.g., only pre-registered users)

	Doesn't support late joiners.  Membership determined by
	Conference Manager.


6.a How do you define conference control?  

6.b Conference control functionality supported.

	Since COCO uses a phone-like model for session management,
	only very basic functionality supported

6.c Other control functions you would like to support.


	

7. Conference control protocol details: 

  - Explicit vs implicit setup

	Setup and conference membership is explicitly controlled by
	the conference manager.

  - Interconnectivity of participants

	All participants can view/hear conference stream from any other
	participant.  Although CC is centralized the media streams are
	distributed directly among participants (using multicast where
	available).

  - State sharing

	State controlled by Conference Manager.


8. Hardware/software platforms.

	SunOS 4.1.X, Solaris 5.1 Sparcstations with SS audio and
	optional Ariel S56X audio card. Video via home-brew DIME board.

9.a What specifically has or has not been implemented?

	All of the above has been implemented and field tested.

9.b What was unexpectedly easy or difficult to implement?

	Because we saw this as a hard area, we kept to a very simple
	conference management model.

9.c What might you change as a result?
	We *are* changing the following --
		Late joiners
		Variable QOS depending not only on load, but also on
			social factors (like who's talking now...)

10. If available, suggested readings about your work on confctrl.

	Tang, John C. and Ellen A. Isaacs, "Why Do Users Like Video?
	Studies of Multimedia-Supported Collaboration", Computer
	Supported Cooperative Work: An International Journal,
	forthcoming. Also available as Sun Microsystems Laboratories,
	Inc. Technical Report TR-92-5.

	Isaacs, Ellen, A. and John C. Tang, "What Video Can and Can't
	Do for Collaboration", Proceedings of the ACM Multimedia `93
	Conference, August 1993, Anaheim, CA, forthcoming.






----- End Included Message -----


From rem-conf-request@es.net Sat Mar 27 12:51:04 1993
Date: 27 Mar 1993 15:11:41 -0400 (EDT)
From: John Storck <STORCK@bumeta.bu.edu>
Subject: Video List
To: rem-conf@es.net
X-Vms-To: IN%"rem-conf@es.net"
X-Vms-Cc: STORCK
Mime-Version: 1.0
Content-Transfer-Encoding: 7BIT
Status: RO
Content-Length: 124
X-Lines: 7

HELP

Is this a list that deals with interactive video technology and research?

John Storck
Boston University
617/353-3366

From rem-conf-request@es.net Sun Mar 28 16:57:19 1993
Date: Sun, 28 Mar 93 16:44:04 -0800
From: arc@sgi.com (Andrew Cherenson)
To: joeb@beagle.nersc.gov, rem-conf@es.net
Subject: Re: new IP Multicast release (for SGI IRIX 4.0.x)
Status: RO
Content-Length: 576
X-Lines: 14


A version of Steve Deering and Van Jacobson's new IP multicast routing 
software compiled for Silicon Graphics IRIX 4.0.x releases is available
via anonymous FTP from ftp.sgi.com in the /sgi/ipmcast/encap directory.  

The directory contains the mrouted binary and sources, kernel object files 
for mcast routing (ip_input.o, ip_mroute.o), and Van Jacobson's in_pcb 
changes to allow multiple mcast listeners (in_pcb.o, udp_usrreq.o).
See the README for details. 

These changes have received minimal testing so be sure to save a 
working kernel and mrouted, just in case.



From rem-conf-request@es.net Sun Mar 28 21:35:00 1993
From: Jeff Hughes <jeff@col.hp.com>
Subject: Video-Conferencing
To: rem-conf@es.net
Date: Thu, 25 Mar 93 14:26:50 MST
Mailer: Elm [revision: 66.25]
Content-Length: 647
Status: RO
X-Lines: 17

Hello,

   I would like to set up my workstation so that I can monitor the
IETF sessions over the internet. Can you send instructions as to 
how I can do that?

   I have an HP9000 series 360 color workstation, and access to
the internet. I read a recent note from the IETF that gave IP addresses
and port numbers, but it seems like there must be some application 
that I need to run at my end.

--
Jeff Hughes			   	Internet: jeff@col.hp.com
Network Test Division   
Hewlett-Packard Corporation	   	Phone/Voicemail/HP-Telnet:
5070 Centennial Blvd.                         (719) 531-4777	
Colorado Springs, Colorado 80919    	Fax:  (719) 531-4505

From rem-conf-request@es.net Mon Mar 29 05:41:42 1993
To: rem-conf@es.net
Subject: SGI version of vat available
Date: Mon, 29 Mar 93 05:29:23 PST
From: Van Jacobson <van@ee.lbl.gov>
Content-Length: 293
Status: RO
X-Lines: 8

A version of vat that runs on SGI machines under IRIX 4.0.x is
available for anonymous ftp from ftp.ee.lbl.gov:sgi-vat.tar.Z.

This version has not had a whole lot of testing (~3 hours) but
I thought I'd try to get it out before IETF in case people would
like to try it.  Expect bugs.

 - Van

From rem-conf-request@es.net Mon Mar 29 07:48:30 1993
Date: Mon, 29 Mar 1993 17:29:07 +0200
From: Milan Sterba <Milan.Sterba@vse.cs>
X-Mailer: Mail User's Shell (7.2.5 10/14/92)
To: rem-conf@es.net
Subject: IETF-1-LOW-AUDIO.MCAST.NET/4100
X-Charset: ASCII
X-Char-Esc: 29
Content-Length: 471
Status: RO
X-Lines: 17


The IETF-1-LOW-AUDIO.MCAST.NET/4100 channel seems to be mute (at least
in Europe and in Australia).

Regards
Milan Sterba


-- 

======================================================================
Prague School of Economics		e-mail : Milan.Sterba@vse.cs
Computing Center			tel : +42 2 21 25 704
nam. W. Churchilla 4			home: +42 2 823 78 59	
130 67 Praha 3				fax : +42 2 235 85 09
Czechoslovakia
=======================================================================

From rem-conf-request@es.net Mon Mar 29 12:21:46 1993
Date: Mon, 29 Mar 1993 11:53:07 -0800
From: schooler@ISI.EDU
Posted-Date: Mon, 29 Mar 1993 11:53:07 -0800
To: rem-conf@es.net
Subject: Re: confctrl BOF template [ivs]
Cc: schooler@ISI.EDU
Content-Length: 4665
Status: RO
X-Lines: 151

------------ Forwarded Message ------------ 

>From turletti@jerry.inria.fr Mon Mar 29 06:03:22 1993
To: schooler@ISI.EDU
Cc: huitema@sophia.inria.fr
Subject: Re: confctrl BOF template
Date: Mon, 29 Mar 93 16:06:35 +0200
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>



    
    		    Conference Control BOF Template
    		    -------------------------------
    


    1. Program name: "IVS" (Inria Videoconferencing System)


    2. Contact person, affiliation and e-mail address.

	Thierry Turletti  
	INRIA Sophia Antipolis
	RODEO Project
        turletti@sophia.inria.fr


    3. Target operating environment and key design considerations:

	* Both LAN and WAN
	* digital audio and video media
	* packet technology
	  IP multicast extensions used on top of UDP
	* H.261 software video codec.

 
   4.a Type of conference styles supported by your system/protocol.

	* Point to point.  A daemon runs at each side to
	   simulate a phone call.

	* Small conference (fewer than 10 participants)
	   --> feedback from decoders allowed (NACK, Full Intra Request).

	* Large conference
	   --> no feedback from decoders.


    4.b Profile of user community: 

	No expertise level required.
	Freely available in the public domain. 
    

    5. Architecture assumptions: 

	* Distributed model.  A session manager resides at each
	  participant's side; it manages that user's audio/video
	  encoding/decoding options.

	* The session manager must choose the correct options
          according to the type of conference and the network conditions.
	  For example, it has to decide whether feedback from decoders
	  is allowed, and whether knowledge of the full participant
	  list is feasible.  The bandwidth used is tunable during
	  ongoing sessions.


    6.a How do you define conference control?

        Managing sessions, the users in a session, and the media used
        (some media have higher priority than others).
    

    6.b Conference control functionality supported.

	The following packets are currently used:

	- Description (name, audio/video encoding, feedback allowed/avoided)
	- Bye : the participant is leaving
	- Hello : a new participant requests a prompt description from
	          the other participants.  Ignored if feedback is avoided.

	Feedback from decoders: avoided or allowed according to
	the number of participants in the conference. This option
	can be changed during ongoing sessions.
    

    6.c Other control functions you would like to support.

	- merging conferences.
	- QOS control to find the maximal bandwidth allowed.
	- interaction between media, e.g. to favor audio quality
	  over video.
    

    7. Conference control protocol details: 

	Implicit setup.
	  For a small conference, a "HELLO" packet can be sent to
	  force each participant to describe itself quickly.

	Each participant periodically sends its state to the group
	and keeps an up-to-date list of each member's current state.
	A member is considered to have left the conference when it
	stops sending its state.
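
	This periodic-announce-and-expire scheme is essentially
	soft-state membership.  A minimal sketch (the interval and
	timeout values are assumptions, not IVS's actual parameters):

	```python
	import time

	ANNOUNCE_INTERVAL = 5.0                  # assumed seconds between state packets
	EXPIRE_AFTER = 3 * ANNOUNCE_INTERVAL     # miss ~3 announcements -> assume gone

	class MemberTable:
	    """Tracks conference members via periodic state announcements."""
	    def __init__(self, clock=time.monotonic):
	        self.clock = clock
	        self.last_heard = {}             # member id -> time of last state packet

	    def on_state_packet(self, member_id):
	        # Any state packet refreshes the member's entry.
	        self.last_heard[member_id] = self.clock()

	    def on_bye(self, member_id):
	        # Explicit departure ("Bye"): drop immediately.
	        self.last_heard.pop(member_id, None)

	    def expire(self):
	        # Members that stopped announcing are considered to have left.
	        now = self.clock()
	        stale = [m for m, t in self.last_heard.items()
	                 if now - t > EXPIRE_AFTER]
	        for m in stale:
	            del self.last_heard[m]

	    def members(self):
	        return sorted(self.last_heard)
	```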

	Robustness: when feedback is allowed, video decoders can
	send NACKs and full-image encoding requests (a kind of full
	refresh) to their video sources.  In this mode, the first
	image received will be fully encoded, and the effects of
	packet loss are reduced.
	  When there is no feedback, the video coder instead forces
	encoding of all blocks more often and limits the number of
	successive "inter"-encoded blocks.  ("Inter" means that only
	the difference between two successive images is encoded; use
	of "inter" mode increases the compression rate but also
	increases sensitivity to packet loss.)
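
	Limiting successive "inter" encodings can be sketched as a
	per-block refresh policy (the limit here is an assumed value,
	not IVS's actual parameter):

	```python
	# Hypothetical refresh policy: force an "intra" (full) encoding of a
	# block after it has been sent as "inter" (difference) MAX_INTER
	# times in a row, so packet loss cannot propagate indefinitely.
	MAX_INTER = 4   # assumed limit

	def choose_modes(num_frames, num_blocks, max_inter=MAX_INTER):
	    """Return a per-frame list of 'intra'/'inter' decisions per block."""
	    since_intra = [0] * num_blocks   # consecutive inter encodings per block
	    frames = []
	    for f in range(num_frames):
	        modes = []
	        for b in range(num_blocks):
	            if f == 0 or since_intra[b] >= max_inter:
	                modes.append("intra")    # full block: stops error propagation
	                since_intra[b] = 0
	            else:
	                modes.append("inter")    # difference only: cheaper, fragile
	                since_intra[b] += 1
	        frames.append(modes)
	    return frames
	```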
    

    8. Hardware/software platforms.

	X11 - Athena toolkit. IP multicast extensions required.

	Current experiments are done on Sparc, HP, SGI and DEC stations.

	Video framegrabbers used are: VideoPix, Parallax, Indigo board,
				      Raster Rops and VIDEOTX.
	

    9.a What specifically has or has not been implemented?

	All conference control functions mentioned are implemented.
	The "avoid feedback" mode has not yet been well tested over
	a WAN with many participants.

	Automatic adaptation of the bandwidth tuning to network
	conditions has not yet been implemented.

    
    10. If available, suggested readings about your work on confctrl.

	The following report describing the previous IVS version is
	available by anonymous ftp from avahi.inria.fr in directory
	/pub/videoconference:

	"H.261 software codec for videoconferencing over the Internet",
	INRIA Research Report no. 1834, Sophia Antipolis, January 1993.


------------ End of Forwarded Message ------------ 

From rem-conf-request@es.net Mon Mar 29 14:20:25 1993
From: field@cs.pitt.edu (Brian Field)
Subject: problems receiving multicast
To: rem-conf@es.net
Date: Mon, 29 Mar 1993 17:02:51 -0500 (EST)
Cc: field@flash.cs.pitt.edu (Brian Field)
X-Mailer: ELM [version 2.4 PL20]
Content-Type: text
Content-Length: 153
Status: RO
X-Lines: 9


Could someone who is successfully getting a feed from PSC drop me a note.  I'm
not getting anything right now...


Thanks
Brian
-----
field@cs.pitt.edu

From rem-conf-request@es.net Mon Mar 29 14:38:53 1993
From: Aydin Edguer <edguer@alpha.CES.CWRU.Edu>
Subject: IETF meeting
To: rem-conf@osi-west.es.net
Date: Mon, 29 Mar 93 17:14:55 EST
X-Mailer: ELM [version 2.3 PL11]
Content-Length: 233
Status: RO
X-Lines: 6

I am seeing frequent drop-outs (every 10-15 minutes) that last for
10-15 seconds (long enough to get a "Signal lost" in nv).  Has anyone else
been seeing similar behavior?  Any suggestions for tracking down the
problem?

Aydin

From rem-conf-request@es.net Tue Mar 30 06:31:23 1993
X-Mailer: InterCon TCP/Connect II 1.1
Date: Tue, 30 Mar 1993 09:15:28 -0500
From: Bob Stratton <strat@intercon.com>
To: rem-conf@es.net
Subject: video?
Content-Length: 98
Status: RO
X-Lines: 7

Is it just me, or are there no video sessions being
advertised right now? (0910 EST).

--Strat




From rem-conf-request@es.net Tue Mar 30 21:09:43 1993
Posted-Date: Tue 30 Mar 93 20:55:53-PST
Date: Tue 30 Mar 93 20:55:53-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Overnight IETF playback experiment
To: rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@MMC.ISI.EDU>
Content-Length: 732
Status: RO
X-Lines: 16

Folks,
	An hour or so ago, we started the videotape delay replay of
today's IETF sessions.  We have two VCRs this time, and today we
attempted to always start and stop them at the same instant for each
of today's sessions, in the hope that when we used both cameras to
tape Dave Clark's evening BOF the two tapes would be at least
approximately synchronized.  This is an experiment.

So -- Anyone who will be watching about 7 or 8 hours from now, i.e.
about 12 noon GMT, please report back whether the tapes stayed in
sync well enough to be useful.  The Channel 1 audio and video will
be in sync from one tape, showing Dave, and the Channel 2 video
will show the slides (then later audience views).  Thanks.

						-- Steve
-------

From rem-conf-request@es.net Tue Mar 30 23:15:49 1993
Date: Wed, 31 Mar 93 09:22:00 +0300
From: Kauto Huopio <Kauto.Huopio@lut.fi>
To: rem-conf@es.net
Subject: Urgent request for SGI binaries
Content-Length: 160
Status: RO
X-Lines: 5

Where are the SGI binaries?  One of my friends is to hold a demo in a
very short time and he'd need SGI binaries of vat etc.  Please
respond quickly.

--Kauto

From rem-conf-request@es.net Fri Mar 26 10:41:00 1993
To: mbone@isi.edu, rem-conf@es.net, vmtp-ip@gregorio.stanford.edu
Cc: Steve Deering <deering@parc.xerox.com>
From: "Louis A. Mamakos" <louie@NI.umd.edu>
Subject: Re: new IP Multicast release
Date: Fri, 26 Mar 1993 13:25:03 -0500
Sender: louie@sayshell.umd.edu
Content-Length: 711
Status: RO
X-Lines: 19


I was under the impression that this multicast conferencing
infrastructure was still experimental.  What better time to run an
experiment with some new technology than before an IETF?  There'll be
lots of people involved and much opportunity for feedback.

Not only that, but the new software should have less impact on the
"production" part of the Internet since there will be less use of
source routing, which as I understand, was the cause of some problems
in the past.

Also, "no one is twisting your arm" and "what do you want for free"
come to mind.  As far as I remember, participation in this
experimental technology is still voluntary.

IMHO,

Louis A. Mamakos
University of Maryland, College Park

From rem-conf-request@es.net Fri Mar 26 09:01:42 1993
To: mathis@pele.psc.edu
Cc: mbone@isi.edu, rem-conf@es.net, vmtp-ip@gregorio.stanford.edu
Subject: Re: new IP Multicast release
Date: Fri, 26 Mar 1993 08:51:54 PST
Sender: Steve Deering <deering@parc.xerox.com>
From: Steve Deering <deering@parc.xerox.com>
Content-Length: 152
Status: RO
X-Lines: 7

> You are absolutely insane to be releasing routing software and kernels this
> close to the IETF...

Damned if we do and damned if we don't...

Steve


From rem-conf-request@es.net Fri Mar 26 02:12:55 1993
To: mbone@isi.edu, rem-conf@es.net, vmtp-ip@gregorio.stanford.edu
Cc: deering@parc.xerox.com
Subject: new IP Multicast release
Date: Fri, 26 Mar 1993 01:57:44 PST
Sender: Steve Deering <deering@parc.xerox.com>
From: Steve Deering <deering@parc.xerox.com>
Content-Length: 5395
Status: RO
X-Lines: 139

Thanks to Van Jacobson, we now have a version of the IP multicast routing
software that uses IP encapsulation for tunneling, rather than source
routing.  The new release for SunOS is in gregorio.stanford.edu:vmtp-ip/
and in parcftp.xerox.com:pub/net-research/ under the name:

		ipmulti-sunos41x.tar.Z

It replaces the previous file of that name, and also the mrouted.tar.Z
that was available on parcftp since November 23.  If you just want the
changes from the previous version, or if you want to update the IP multicast
sources for another BSD-based OS, fetch the following file instead:

		ipmulti-sunos41x-changes.tar.Z

We recommend that all MBone sites upgrade to this new version, and if you
can do it in the next couple of days before IETF, that would be great!
(If you can't, that's OK -- the new version can still do source-route
tunneling to talk to old versions).  HOWEVER, this new version has seen
very little testing, so be sure to save your current vmunix and mrouted
in case it doesn't work and we have to roll back to the previous version.
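
For anyone curious what "IP encapsulation for tunneling" means concretely:
the multicast datagram is simply wrapped in an outer unicast IP header
carrying protocol number 4 (IP-in-IP).  A rough sketch of building such an
outer header (addresses hypothetical; this is not the actual mrouted code):

```python
import struct, socket

def ip_checksum(hdr: bytes) -> int:
    # Standard 16-bit one's-complement sum over the header.
    if len(hdr) % 2:
        hdr += b"\0"
    s = sum(struct.unpack("!%dH" % (len(hdr) // 2), hdr))
    s = (s & 0xFFFF) + (s >> 16)
    s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def encapsulate(inner: bytes, tunnel_src: str, tunnel_dst: str, ttl=64) -> bytes:
    """Wrap an IP datagram in an outer unicast IPv4 header, protocol 4."""
    total_len = 20 + len(inner)
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45,          # version 4, header length 5 words
                      0,             # type of service
                      total_len,
                      0, 0,          # id, flags/fragment offset
                      ttl,
                      4,             # protocol 4 = IP-in-IP
                      0,             # checksum placeholder
                      socket.inet_aton(tunnel_src),
                      socket.inet_aton(tunnel_dst))
    csum = ip_checksum(hdr)
    return hdr[:10] + struct.pack("!H", csum) + hdr[12:] + inner
```

At the far end of the tunnel the outer header is stripped and the inner
multicast datagram is forwarded as usual, which is why no source-route
options need to traverse the intervening routers.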

Here's the README file describing the changes in (a little) more detail:

-------------------
This is an upgrade to the IP Multicast software for SunOS 4.1.x, distributed
as ipmulti-sunos41x.tar.Z on Nov 13, 1992.  The source code herein should
also be of use for upgrading the IP Multicast code in BSD-derived systems
other than SunOS.  This upgrade consists of the following changes:

	(1) Support for tunneling via IP encapsulation (in addition
	    to -- and in preference to -- the source-route tunneling
	    supported in previous releases).  The changes are in the
	    kernel files ip_mroute.c and ip_mroute.h and in parts of the
	    routing demon, mrouted.  There is also a minor change to
	    netstat to print out one additional multicast routing
	    statistic (netstat -Ms).   This code was provided by
	    Van Jacobson of LBL.

	    The new mrouted should be run only on an upgraded kernel,
	    and the old mrouted should be run only on a non-upgraded
	    kernel.  With the new mrouted, tunnels use encapsulation
	    by default; to configure a source-routed tunnel, you must
	    add the keyword "srcrt" to the line for that tunnel in
	    /etc/mrouted.conf, for example:

		tunnel 1.2.3.4 5.6.7.8 srcrt metric 3 threshold 64

	(2) Changes to mrouted to allow its configuration to be queried
	    remotely, for topology debugging and mapping.  Some of the
	    changes were provided by Pavel Curtis of Xerox PARC, and
	    previously distributed as an mrouted-only release on
	    Nov 23, 1992.  Additional changes were provided by
	    Van Jacobson.

	(3) A new version of the kernel file in_pcb.c that allows incoming
	    multicast packets destined to the same UDP port to be
	    delivered to different processes, according to their
	    destination IP multicast addresses.  Provided by Van Jacobson.

	(4) A change of size of the kernel's audio buffer, from 1024 bytes
	    to 160 bytes, for compatibility with vat and vat-interoperable
	    audio programs.  (This doesn't really have anything to do with
	    IP multicast.)
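
The change in (3) is the kernel side of what lets several multicast tools
share one UDP port.  The user-level prerequisite can be sketched as follows
(a hypothetical Python illustration, not the kernel code itself):

```python
import socket

def make_listener(port):
    # With SO_REUSEADDR set on both, two UDP sockets may bind the same
    # port; the in_pcb change then delivers incoming multicast packets
    # to each according to its destination group address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    return s

a = make_listener(0)             # let the kernel pick a free port
port = a.getsockname()[1]
b = make_listener(port)          # second listener on the same port

# Each process would then join its own group with IP_ADD_MEMBERSHIP, e.g.:
# mreq = socket.inet_aton("224.6.6.4") + socket.inet_aton("0.0.0.0")
# b.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
```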

This upgrade consists of the following files:

README_ENCAPS_UPGRADE	- this file

mrouted/*		- the sources for the mrouted demon and its
			  querying programs, plus sparc and sun3
			  binaries for those programs.

netstat			- sparc binary for upgraded netstat program.

sys.sunos411/netinet/in_pcb.c.diff		- kernel files
            /netinet/ip_mroute.c
            /netinet/ip_mroute.h

            /sbusdev/audio_79C30.h.diff

            /sun4c.OBJ/audio_79C30.o
            /sun4c.OBJ/in_pcb.o
            /sun4c.OBJ/ip_mroute.o

            /sun4.OBJ/in_pcb.o
            /sun4.OBJ/ip_mroute.o

            /sun3.OBJ/in_pcb.o
            /sun3.OBJ/ip_mroute.o

sys.sunos412/netinet/in_pcb.c.diff
            /netinet/ip_mroute.c
            /netinet/ip_mroute.h

            /sbusdev/audio_79C30.h.diff

            /sun4c.OBJ/audio_79C30.o
            /sun4c.OBJ/in_pcb.o
            /sun4c.OBJ/ip_mroute.o

            /sun4m.OBJ/ip_mroute.o
            /sun4m.OBJ/in_pcb.o
            /sun4m.OBJ/audio_79C30.o

sys.sunos413/netinet/in_pcb.c.diff
            /netinet/ip_mroute.c
            /netinet/ip_mroute.h

            /sbusdev/audio_79C30.c.diff
            /sbusdev/audio_79C30.h.diff
            /sbusdev/dbrivar.h.diff

            /sun4c.OBJ/audio_79C30.o
            /sun4c.OBJ/ip_mroute.o
            /sun4c.OBJ/in_pcb.o

            /sun4m.OBJ/audio_79C30.o
            /sun4m.OBJ/dbri_conf.o
            /sun4m.OBJ/dbri_mmcodec.o
            /sun4m.OBJ/in_pcb.o
            /sun4m.OBJ/ip_mroute.o

If you have sources for your kernel, install the source files and apply the
diffs (against the original Sun sources, not previous multicast sources) for
your SunOS version and architecture.  If you have a binary-only release,
install the .o files in place of the corresponding .o's from Sun.  Then
rebuild your vmunix.

WARNING: the only .o's that have been tested are those for 4.1.3 sun4c,
and even those have not undergone much testing.  Save your previous vmunix
and mrouted, and be prepared to revert to the previous stuff if this turns
out not to work.

Many thanks to Van and Pavel for their contributions, and to Steve Casner
for generating all of the .o's.  Please report bugs to mbone@isi.edu.

						Steve Deering
						Xerox PARC
						March 26, 1993

