From rem-conf-request@es.net Wed Jan 20 13:21:25 1993
From: Yee-Hsiang Chang <yhc@concert.net>
Subject: Re: conference services
To: sylvia@dcs.qmw.ac.uk
Date: Wed, 20 Jan 93 15:52:40 EST
Cc: rem-conf@es.net
X-Mailer: ELM [version 2.3 PL11]
Status: RO
Content-Length: 2247
X-Lines: 40

> 
> I've been wondering why it's only conferencing that people think about in 
> terms of support functions in the protocol layers.  There are likely to be 
> other applications of multimedia that may need a different kind of support 
> from that required by conferencing.  Some of the applications we've been 
> developing at QMW for a local multimedia  (analogue) network, use video for 
> non-conferencing modes of collaboration. E.g. Video 'bursts' (3-sec video 
> connection, either one-way or two-way alternately), also one-to-one open 
> channels.  Future applications are planned that will have other 
> characteristics, e.g. may use periodic short-duration video connections.  
> 
> By contrast, conferencing seems to assume that video channels will most likely 
> be long-lived, synchronised with audio, and create a fairly constant demand for 
> channel bandwidth.  Conferencing can also stand a fairly long set-up time, 
> which a brief video stream couldn't. I think there's a danger that by 
> concentrating solely on the traditional conferencing-type applications, future 
> novel uses of video will not get the services they need. 
> I'd welcome comments on this.
>

Can you be more specific about the network protocol support your applications need?
It seems to me that you are asking for fast connection setup and teardown for
multimedia communications.  This does not conflict with the conference setup
requirements being discussed at the IETF remote conference BOF; the idea there is
that the conference setup mechanisms should support a spectrum of conferencing
applications, ranging from complicated secure conferences to simple, easy-to-set-up
types.  One example of this easy setup type is "multicasting the IETF meeting
to the Internet world".  The current setup only requires the participants to join 
the multicast group.
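
(For concreteness, here is a minimal sketch of what "joining the multicast group"
means at the socket level, assuming the BSD sockets multicast API; the group
address and port are only illustrative, taken from the audiocast announcements
seen on this list:)

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main()
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    struct ip_mreq mreq;
    char buf[2048];
    int n;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4100);              /* base port announced for the session */
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    mreq.imr_multiaddr.s_addr = inet_addr("224.0.1.10");  /* conference group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *)&mreq, sizeof(mreq));

    for (;;) {                                /* packets would go to the audio decoder */
        n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0)
            break;
        printf("got %d bytes\n", n);
    }
    return 0;
}

No negotiation with the network is involved in the join itself, which is exactly
the point for the simple setup case.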

In terms of network resource requirements such as channel bandwidth, I
believe short-lived streams still have to negotiate network resources
before communicating.  Otherwise, you either have to pay for uninterrupted
service by reserving the bandwidth all the time, or suffer the statistical
fluctuation of traffic on the network with a non-guaranteed service.  

> Sylvia Wilbur  
> 

Yee-Hsiang Chang

From rem-conf-request@es.net Wed Jan 20 14:16:52 1993
Date: Wed, 20 Jan 93 16:51:40 EST
From: oj@roadrunner.pictel.com (Oliver Jones)
To: rem-conf@es.net
Subject: Re: conference services
Reply-To: oj@pictel.com
Organization: PictureTel Corporation
Phone: +1 508 977 8396
Fax: +1 508 532 6893
Status: RO
Content-Length: 836
X-Lines: 19

Fengmin Gong wrote:

   ...
   (1) many-to-many that fits "brainstorming" kind of conferencing style.
       Everyone is allowed to talk freely (with due courtesy :-) and a constant
       video presence of all participants to all others may be necessary.
   ...
   Actually, these different services...raise different requirements for audio
   and video capability of end systems, e.g., audio mixing and handling of
   multiple video inputs.

The many-to-many mode is particularly demanding; if the system is to
support "due courtesy", then low latency, echo suppression, and rapid
switching between video streams are very useful features.

Oliver Jones                 PictureTel Engineering
email:  oj@world.std.com     222 Rosewood Drive
tel:   +1(508)977-8396       Danvers, MA 01923-1393
video: (+1)700-561-9938&9939 fax: +1(508)532-6893

From rem-conf-request@es.net Wed Jan 20 14:38:32 1993
From: Fengmin Gong <gong@concert.net>
Subject: Re: conference services
To: sylvia@dcs.qmw.ac.uk
Date: Wed, 20 Jan 93 17:37:21 EST
Cc: rem-conf@es.net
X-Mailer: ELM [version 2.3 PL11]
Status: RO
Content-Length: 2800
X-Lines: 53

> 
> > 
> > I've been wondering why it's only conferencing that people think about in 
> > terms of support functions in the protocol layers.  There are likely to be 
> > other applications of multimedia that may need a different kind of support 
> > from that required by conferencing.  Some of the applications we've been 
> > developing at QMW for a local multimedia  (analogue) network, use video for 
> > non-conferencing modes of collaboration. E.g. Video 'bursts' (3-sec video 
> > connection, either one-way or two-way alternately), also one-to-one open 
> > channels.  Future applications are planned that will have other 
> > characteristics, e.g. may use periodic short-duration video connections.  
> > 
> > By contrast, conferencing seems to assume that video channels will most likely 
> > be long-lived, synchronised with audio, and create a fairly constant demand for 
> > channel bandwidth.  Conferencing can also stand a fairly long set-up time, 
> > which a brief video stream couldn't. I think there's a danger that by 
> > concentrating solely on the traditional conferencing-type applications, future 
> > novel uses of video will not get the services they need. 
> > I'd welcome comments on this.
> >
> 
> Can you be more specific about the network protocol support your applications need?
> It seems to me that you are asking for fast connection setup and teardown for
> multimedia communications.  This does not conflict with the conference setup
> requirements being discussed at the IETF remote conference BOF; the idea there is
> that the conference setup mechanisms should support a spectrum of conferencing
> applications, ranging from complicated secure conferences to simple, easy-to-set-up
> types.  One example of this easy setup type is "multicasting the IETF meeting
> to the Internet world".  The current setup only requires the participants to join 
> the multicast group.
> 
> In terms of network resource requirements such as channel bandwidth, I
> believe short-lived streams still have to negotiate network resources
> before communicating.  Otherwise, you either have to pay for uninterrupted
> service by reserving the bandwidth all the time, or suffer the statistical
> fluctuation of traffic on the network with a non-guaranteed service.  
> 
> > Sylvia Wilbur  
> > 
> 
> Yee-Hsiang Chang
> 

I also agree that such applications don't seem to require anything
more than what a general conferencing application will need.  In particular,
the application mentioned seems to require only video, as opposed
to audio and video, which essentially simplifies the matter of
audio/video synchronization.  Furthermore, the periodic video bursts
do present an opportunity for the resource management mechanism to
do a better job.

Fengmin Gong
MCNC Communications Research

From rem-conf-request@es.net Thu Jan 21 01:55:11 1993
To: sylvia@dcs.qmw.ac.uk
Cc: rem-conf@es.net, J.Crowcroft@cs.ucl.ac.uk
Subject: Re: conference services
Date: Thu, 21 Jan 93 08:57:48 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Status: RO
Content-Length: 846
X-Lines: 24



 >I've been wondering why it's only conferencing that people think about in 
 >terms of support functions in the protocol layers.  

well, you've got to start somewhere!

 >channel bandwidth.  Conferencing can also stand a fairly long set-up time, 
 >which a brief video stream couldn't. I think there's a danger that by 

i think people _are_ considering rapid floor control - for instance
turning on and off filtering in routers in the multicast tree of a
net, rather than at source or sink...this means that bursts of video
_are_ considered,

however, what is a very short lived video session for?

if it requires high quality, and is gonna use a significant portion of
shared network bandwidth, then there is no way round the setup - if
it's best effort, it could be treated like other best effort streams,
like unreliable user data...

 jon


From rem-conf-request@es.net Thu Jan 21 13:04:22 1993
To: Reseaux IP Europeens <ripe-list@ripe.net>
Subject: 14th RIPE Meeting Audiocast
Cc: mbone@isi.edu, rem-conf@es.net
From: Daniel Karrenberg <Daniel.Karrenberg@ripe.net>
X-Organization: RIPE Network Coordination Centre
X-Phone: +31 20 592 5065
Date: Thu, 21 Jan 93 19:38:59 +0100
Sender: Daniel.Karrenberg@ripe.net
Status: RO
Content-Length: 5150
X-Lines: 173



                RIPE Meeting Audiocast

RIPE plans to audiocast the plenary sessions of its upcoming meeting,
held in Prague, Czech Republic on Monday-Wednesday of next week, i.e.
January 25th-27th.  Bandwidth to/from Prague is quite limited and no
network level redundancy exists.  We will use GSM encoding in order to
save bandwidth.  Tests done over the last few weeks indicate that this
works with quite good sound quality.  However, last minute problems can
always force us to cancel the audiocast altogether. 

Anyone wishing to ask questions during the question periods is strongly
advised to use GSM encoding, as the conference site will not be able to
understand you otherwise.  Depending on network load, questions over the
net may not be available at all. 

We will be using the multicast group IETF-1-LOW-AUDIO.MCAST.NET
(224.0.1.10) and base port 4100.  The audiocast will be advertised as a
conference using the LBL sd (session directory) program.  Only one
channel of audio will be available. 

For general information about the necessary software and the MBONE
multicast backbone, fetch the file mbone/faq.txt available from
venera.isi.edu

The meeting schedule is appended below.  All times are Central European
Time, which is 6 hours ahead of EST.  We are planning to
audiocast at least the following sessions. 


Monday January 25th	1400 - 1800  	Plenary Session

Tuesday January 26th	0900 - 1030	Routing Working Group

Wednesday January 27th	0900 - 1600  	Plenary Session
					including reports from the working
					groups on Tuesday
					





				   R I P E

			Agenda of the 14th RIPE meeting
			===============================

			 January 25 - January 27, 1993


######################### Monday 25 January 14:00 h ##########################

 1. Opening (R Blokzijl)					15 min
       o welcome
       o approval of the agenda
       o papers tabled
       o organisation of the meeting
       o dinner for Tuesday evening

 2. Welcome (Prof.J Hlavicka)
       Prof.Jan Hlavicka is the Dean of the Faculty of
       Electrical Engineering of the Czech Technical University

 3. Minutes of the last meeting					15 min
       o approval of the minutes
       o action list

 4. GIX - progress report (P.Lothberg)				15 min

 5. RIPE and the RARE Technical Program (R.Blokzijl)            45 min
       o RIPE representation in the RTC
       o joint projects:
         - route server implementation (T.Bates)
         - generic Internet service specification (T.Bates)
	 - IPv7 overview; European involvement (T.Dixon)
	 - SIP pilot (C.Huitema)
       
......................... Break 16:00 h  -  16:30 h ............................

 6. The introduction of CIDR and BGP4 in Europe			60 min
       (P.Lothberg, A.Others)

 7. Introduction to the demonstrations (M.Sterba)		30 min

 8. The Internet - your local radio station (C.Malamud)         30 min

......................... End of day 1 18:30 h .................................




######################### Tuesday 26 January 09:00 h ###########################

Session A:

   A1 Combined meeting of the Routing WG and the Database WG
      Chairs: Jean-Michel Jouanigot and Wilfried Woeber

......................... Break 10:30 h  -  11:00 h ............................

Session B:

   B1 Routing WG
      Chair: Jean-Michel Jouanigot

   B2 Database WG
      Chair: Wilfried Woeber

......................... Lunch 12:30 h  -  14:00 h ............................

Session C:

   C1 Local Registries WG
      Chair: Daniel Karrenberg

   C2 Mapping WG
      Chair: Daniele Bovio

   C3 DNS WG
      Chair: Francis Dupont

......................... Break 16:00 h  -  16:30 h ............................

Session D:

   D1 RAEC WG
      Chair: Glenn Kowack

   D2 Connectivity WG
      Chair: Milan Sterba

......................... End of day 2 18:00 h .................................


######################### Wednesday 27 January 09:00 h #########################

 9. RIPE NCC (D Karrenberg, P.V.Binst)				60 min
       o report
       o the future of the RIPE NCC:
	 - organisational position
	 - financing

10. Global Internet traffic measurement 			15 min
       o status of technical proposal (D.Karrenberg)
       o report on EBONE statistics project (W.v.d.Scheun)

......................... Break 10:30 h  -  11:00 h ............................

11. EBONE (P Jones, B Stockman)					60 min
       o status report
       o plans for EBONE 93

12. EMPB IP services (PTT Telecom NL)				30 min
       o EMPB is going to introduce IP on its network; EMPB will be 
         connected to some of the RIPE coordinated networks. Issues related
         to interworking between EMPB/IP and the European Internet will be
         presented and discussed.
       o Practical experience (W.Porten)

......................... Lunch 12:30 h  -  14:00 h ............................

13. Reports from the working groups.				 90 min

14. Date, place and time of next meetings                        15 min
       o April 27 - 29, Amsterdam

15. A.O.B.							 15 min

16. Closing

......................... End of day 3 16:00 h .................................

From rem-conf-request@es.net Thu Jan 21 15:57:57 1993
To: Reseaux IP Europeens <ripe-list@ripe.net>, mbone@isi.edu, rem-conf@es.net
Subject: Wanted: GSM decoder SOURCE
From: Guido.van.Rossum@cwi.nl
X-Organization: CWI (Centrum voor Wiskunde en Informatica)
X-Address: P.O. Box 4079, 1009 AB Amsterdam, The Netherlands
X-Phone: +31 20 5924127 (work), +31 20 6225521 (home), +31 20 5924199 (fax)
Date: Fri, 22 Jan 1993 00:53:48 +0100
Sender: Guido.van.Rossum@cwi.nl
Status: RO
Content-Length: 740
X-Lines: 16

In order to listen to audio broadcasts like the one just announced by
Daniel Karrenberg I need *source* for a GSM decoder.  I understand
that VAT includes GSM but it seems to be available as a Sun SPARC
binary only, while I can only use SGI Indigo workstations.  I read the
mbone FAQ but it gives no further clue.

I would prefer a clean set of subroutines that do just the decoding
work (so I can program the network protocol and audio user interface
myself -- I can use code that I have for other encodings) but I'll
take any solution that I can get working in time to listen to the
Prague meeting.

Of course I wouldn't mind having a GSM *coder* so I can test my
decoder :-)

--Guido van Rossum, CWI, Amsterdam <Guido.van.Rossum@cwi.nl>

From rem-conf-request@es.net Thu Jan 21 17:23:44 1993
Posted-Date: Thu 21 Jan 93 17:13:25-PST
Date: Thu 21 Jan 93 17:13:25-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: Wanted: GSM decoder SOURCE
To: Guido.van.Rossum@cwi.nl
Cc: ripe-list@ripe.net, MBONE@ISI.EDU, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@CASNER.ISI.EDU>
Status: RO
Content-Length: 603
X-Lines: 15

From the vat man page and the "CHANGES" file included in the vat.tar.Z
distribution from LBL:

Carsten Bormann (cabo@cs.tu-berlin.de) and Jutta Degener
(jutta@cs.tu-berlin.de) of the Communications and Operating Systems
Research Group (KBS) at the Technische Universitaet Berlin contributed
the GSM codec.

 - Added support for GSM coding (13Kb/s European Digital Cellular
   standard) using a very nicely written GSM library done by Carsten
   Bormann and Jutta Degener of the Communications and Operating
   Systems Research Group (KBS) at the Technische Universitaet Berlin.
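
If it helps, the library presents a small set of calls; a rough decode loop
(a sketch only -- I am assuming a gsm_create/gsm_decode style interface with
33-byte frames and 160-sample output blocks, so check the headers in the
distribution for the real names) would look something like:

#include <stdio.h>
#include <unistd.h>
#include "gsm.h"                     /* header from the GSM library */

int main()
{
    gsm        handle = gsm_create();
    gsm_frame  frame;                /* one 33-byte encoded GSM frame */
    gsm_signal pcm[160];             /* 160 samples = 20 ms at 8 kHz */

    /* read frames on stdin, write 16-bit linear PCM on stdout */
    while (read(0, frame, sizeof(frame)) == sizeof(frame)) {
        if (gsm_decode(handle, frame, pcm) < 0) {
            fprintf(stderr, "bad GSM frame\n");
            break;
        }
        write(1, (char *)pcm, sizeof(pcm));
    }
    gsm_destroy(handle);
    return 0;
}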

							-- Steve
-------

From rem-conf-request@es.net Fri Jan 22 02:14:01 1993
To: Reseaux IP Europeens <ripe-list@ripe.net>
Cc: mbone@isi.edu, rem-conf@es.net
Reply-To: ncc@ripe.net
Subject: Re: 14th RIPE Meeting Audiocast
From: Daniel Karrenberg <Daniel.Karrenberg@ripe.net>
X-Organization: RIPE Network Coordination Centre
X-Phone: +31 20 592 5065
Date: Fri, 22 Jan 93 09:12:57 +0100
Sender: Daniel.Karrenberg@ripe.net
Status: RO
Content-Length: 399
X-Lines: 10


Announcements are now being sent out once a minute on multicast group
IETF-1-LOW-AUDIO.MCAST.NET (224.0.1.10) and base port 4100.  We are
interested in "reception reports" especially from European sites to
<ncc@ripe.net>.  The announcements are being sent from Amsterdam.  We
will use this to keep you informed in case of networking difficulties
to/from Prague during the meeting itself. 


Daniel

From rem-conf-request@es.net Sun Jan 24 17:04:00 1993
Original-Received: Sun, 24 
                   Jan 93 16:25:16 PST from localhost.arc.nasa.gov by 
                   dscs.arc.nasa.gov (4.1/1.5T)
Pp-Warning: Illegal Received field on preceding line
To: sylvia@dcs.qmw.ac.uk
Cc: rem-conf@es.net
Subject: Re: conference services
Date: Sun, 24 Jan 93 16:25:16 -0800
From: Barry "M." Leiner <leiner@nsipo.nasa.gov>
Status: RO
Content-Length: 1737
X-Lines: 40


I think you have a good point. In fact, even during a long conference,
I can imagine wanting to be able to send multimedia objects in bursts
set up dynamically.

Barry

>>  -- using template mhl.format --
>> Date:    Wed, 20 Jan 93 17:42:21 GMT
>> To:      rem-conf@es.net
>> 
>> From:    sylvia@dcs.qmw.ac.uk
>> Subject: Re: conference services
>> 
>> Return-Path: rem-conf-request@es.net
>> Return-Path: <rem-conf-request@es.net>
>> X-Mailer: ELM [version 2.4 PL13]
>> Content-Type: text
>> Content-Length: 1160
>> 
>> I've been wondering why it's only conferencing that people think about in 
>> terms of support functions in the protocol layers.  There are likely to be 
>> other applications of multimedia that may need a different kind of support 
>> from that required by conferencing.  Some of the applications we've been 
>> developing at QMW for a local multimedia  (analogue) network, use video for 
>> non-conferencing modes of collaboration. E.g. Video 'bursts' (3-sec video 
>> connection, either one-way or two-way alternately), also one-to-one open 
>> channels.  Future applications are planned that will have other 
>> characteristics, e.g. may use periodic short-duration video connections.  
>> 
>> By contrast, conferencing seems to assume that video channels will most likely 
>> be long-lived, synchronised with audio, and create a fairly constant demand for 
>> channel bandwidth.  Conferencing can also stand a fairly long set-up time, 
>> which a brief video stream couldn't. I think there's a danger that by 
>> concentrating solely on the traditional conferencing-type applications, future 
>> novel uses of video will not get the services they need. 
>> I'd welcome comments on this.
>> 
>> Sylvia Wilbur  


From rem-conf-request@es.net Wed Jan 27 01:54:12 1993
From: sylvia@dcs.qmw.ac.uk
Subject: More re conference services
To: rem-conf@es.net
Date: Wed, 27 Jan 1993 09:48:16 +0000 (GMT)
Cc: sylvia@dcs.qmw.ac.uk (Sylvia Wilbur;CB210)
X-Mailer: ELM [version 2.4 PL13]
Content-Type: text
Content-Length: 5691
Status: RO
X-Lines: 100

Thanks for all the responses I received on the topic of conferencing versus 
other kinds of applications.  I took a little time to read again the IETF 
drafts by Schulzrinne and Chang on AVT and multimedia comms architecture.

In summary, I take note that the AVT is intended to be extensible, and that 
it is addressing applications such as home entertainment as well as AV 
conferencing.    I would be interested to find out more about what is 
proposed for conference control, and achieving bandwidth efficiency, as 
these are areas where understanding of future multimedia applications 
will be important.  Is it proposed that conference control is just an optional 
service for some kinds of applications, leaving room for other application-
oriented services?  

Based on our experiences at QMW (admittedly rather limited so far), I 
would venture the following observations:
i)  Some uses of video may be of short, fixed duration, and occur 
dynamically or periodically.
ii)  Video may be used to focus on objects and scenes, perhaps periodically, 
without human users being actively involved
iii) In multimedia collaboration, audio/video streams will vary in the 
bandwidth they require according to the patterns of users' interactions.  For 
example, when users are concentrating on their interactions with a shared 
application, they move little, may not talk for several minutes or more, 
and scarcely glance at the video.  When their 'data burst' is over, users 
typically lean back, and start to use the AV for discussion about what they 
have just done.  Clearly, this kind of collaborative application will be 
characterised by video and audio bursts.  A clever application might even 
detect when users are busy with their workstations, and request low-
resolution video during these periods, to economise on bandwidth.
iv)  While AV over networks may be exciting for the research community, 
the wider user community is not so enthusiastic (in our experience).  If we 
want our systems to be used widely, we must take account of a broad range 
of applications and design principles.

For those with time to read it, here's a bit more discussion:

Even taking account of the goal of extensibility for AVT, there are already 
assumptions embedded in the AVT document which will not 
necessarily be true for all multimedia applications and might affect AVT 
design, e.g. (p.30) "the lifetime of conferences is unknown".  For some 
applications, the lifetime of a video stream may be known, and be 
constrained by a user-customisable parameter within the range of a few 
seconds (e.g. for a video 'glance').

Chang also says "current setup only requires the participants to join the 
multicast group".   This could be interpreted as being based on a particular 
metaphor of multimedia multi-user activity, e.g. suitable for a kind of 
cocktail-party model of conferencing or real-time versions of newsgroups, 
in which users browse current conferences and join the ones that appear 
interesting.    But for some multimedia applications, users may be 
participating passively rather than actively (i.e. a join operation is not 
appropriate), or, in other situations, video cameras may be focussing on an 
object, scene, piece of equipment, etc. without a human user being directly 
involved.    I appreciate that the design of an application does not have to 
directly reflect the primitives supported at the Transport layer, but am 
concerned that applications should not be constrained by the proposed 
AVT services.

The problem with discussing multimedia applications at this point in 
time seems to be that there are so many different views on what they will 
be like, and what the issues are.  At QMW, we have installed multimedia 
on people's desktops, in open-plan offices, and have experimented with 
getting non-researchers to use it for real tasks.   We are particularly keen 
to use it to improve the efficiency of setting up ad-hoc meetings and 
consultations - i.e. to locate and establish contact with busy people around 
the department.  

We do have a conferencing mode of collaboration, in which a variety of 
multi-user collaboration-aware  applications are available for people to  
use while using group AV.   All this stuff is based on our local LAN using 
X Windows and TCP/IP.   We mix video and audio streams, and set up 
dynamic connections via a local switch, which also has a connection to the 
University of London video-conferencing network.   This switch is 
controlled by a central server, and provides connection services to client 
applications.  During conference set-up, use of AV devices is negotiated 
among the nodes involved.  We do not, however, keep a central db of 
current conferences for browsing, having no requirement for this in our 
work environment.   Users are invited to participate in a session, and 
latecomers can be included.

We envisage that the system will also be useful for distance training, 
remote laboratory supervision of groups of collaborating students, remote 
consultation, etc., (our teaching lab is about a quarter of a mile from staff 
offices).    Remote lab supervision, for example, would need an application 
that connected to each student's workstation in turn for a short period, 
taking a look at their screens, and chatting to them about their progress 
over the AV.  

We've found some real uses for our system so far, some of them quite 
accidentally, and some of them involving remote objects and scenes, 
rather than users.    There seems no reason why these kinds of 
applications shouldn't be useful in other, non-academic environments, 
and over wider networks.   

Sylvia Wilbur





From rem-conf-request@es.net Wed Jan 27 02:43:25 1993
To: rem-conf@es.net, rem-conf@es.net
Subject: DISTRIBUTED MULTIMEDIA SURVEY
From: Chris Adie <cja@castle.edinburgh.ac.uk>
Reply-To: C.J.Adie@edinburgh.ac.uk
Date: Wed, 27 Jan 93 10:30:57 WET
Content-Length: 1410
Status: RO
X-Lines: 38


                      SURVEY OF NETWORKED MULTIMEDIA
                      ------------------------------

The survey of distributed multimedia commissioned by RARE, for which
information was solicited recently, has been completed.  The final
survey report is called:

"A Survey of Distributed Multimedia Research, Standards and Products",
First Edition, 25 January 1993. 

The final report contains responses received up to 22 January 1993.  It
is over 150 pages long, and contains information on over 50 research
projects, 40 standards and 35 products. 

By agreement with the chair of the RARE Multimedia Working Group, the
survey report is now available by anonymous ftp.  It may be found on:

           ftp.ed.ac.uk         129.215.146.5

in directory

           pub/mmsurvey

in files:

 mmsurvey.doc         Word for Windows 2 document (binary)
 mmsurvey.doc.Z       Compressed Word for Windows 2 document (binary)
 mmsurvey.ps          Postscript of document (ascii)
 mmsurvey.ps.Z        Compressed postscript of document (binary)
 mmsurvey.txt         Text form of document (ascii)
 mmsurvey.txt.Z       Compressed text form of document (binary)

Chris Adie                                   Phone:  +44 31 650 3363
Edinburgh University Computing Service       Fax:    +44 31 662 4809
University Library, George Square            Email:  C.J.Adie@edinburgh.ac.uk
Edinburgh EH8 9LJ, United Kingdom


From rem-conf-request@es.net Wed Jan 27 09:39:49 1993
To: ivs-users@jerry.inria.fr
Cc: rem-conf@es.net
Subject: New version of IVS (2.1)
Date: Wed, 27 Jan 93 17:53:56 +0100
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Content-Length: 3187
Status: RO
X-Lines: 84


Version 2.1 of IVS is now available from avahi.inria.fr in the file
/pub/videoconference/ivs.tar.Z.

This version includes a lot of changes & improvements. Here are the main
changes:

	* All Motif calls have been removed. Now, the Athena toolkit is used. 
	  A scrollbar is used to display the list of participants. (The
          previous display was not very fast...) This list of participants
          begins with your local station, then the active stations and then
          the passive stations.

	* A rate control option for video data has been added. Two modes of
	  control are available (a simplified sketch of the first mode is
	  given after this list of changes):

	    - Privilege Quality - In this mode, the frame rate is reduced in
	                          order not to overrun the maximum rate
				  selected. Useful for still images...

	    - Privilege frame rate - In this mode, no delay is added. Rate
				  control is done using the quantizer and
                                  the movement detection threshold. These
                                  parameters are dynamically chosen to fit
                                  within the selected bandwidth.


	* IVSD (IVS daemon) added. Running it in background allows
          you to be called by anyone on the Internet. A small icon will
          appear and a message will pop up when an ivs talk is requested. If
          you click on the "Accept" option, an ivs command will be run
          automatically towards your party. For example, if you want to call
          asterix@obelix.fr, just run "ivs obelix.fr".
          Then, if you click on the "Call up" button, the ivs request is
          sent to the obelix.fr host. If you click on the icon, you will be
          able to initiate an IVS session in unicast or multicast mode.

	* A new button allows you to freeze the image.

	* Now, local display selection, audio encoding mode and VideoPix port 
	  input may be changed during encoding.

	* The audio driver is now automatically closed when there is no audio
	  data encoding/decoding.

        * The video bandwidth and the frame rate are now displayed at the
          encoder side.

	* Speed improvements

	* Port numbers are now different in unicast mode. In this way a
   	  unicast call may be made during a multicast conference.

	* Video and audio decoding options are now implicit except for
	  your local station. To disable this mode, use the -a|-v options
 	  (see manual).

	* IVS is now supporting :

		 # SPARCSTATIONS (audio + video)
		   --> VideoPix framegrabber
		   --> Parallax framegrabber [Thanks to Edgar Ostrowski and
                                                        Frank Ruge]

		 # SGI (audio + video) [Thanks to Guido van Rossum]
    
		 # HP (audio + video)
		   --> Raster Rops framegrabber [Thanks to Edgar Ostrowski, 
					       Frank Ruge and Markus Rebensburg]

		 # DECSTATIONS (only video decoding ...) 
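
As promised above, here is a simplified sketch of the "Privilege Quality" idea
(an illustration of the principle only, not the code actually used in IVS):
after each encoded frame the sender computes how long that frame is allowed
to occupy at the selected rate and holds back the next grab, so the frame
rate drops instead of the image quality.

#include <sys/time.h>
#include <unistd.h>

/* Pace the encoder so its output does not exceed max_kbits_per_s. */
void pace_output(int frame_bytes, int max_kbits_per_s)
{
    static struct timeval last;
    struct timeval now;
    long budget_us, elapsed_us;

    /* time this frame may occupy on the wire at the selected rate */
    budget_us = (long)frame_bytes * 8 * 1000 / max_kbits_per_s;

    gettimeofday(&now, (struct timezone *)0);
    if (last.tv_sec != 0) {
        elapsed_us = (now.tv_sec - last.tv_sec) * 1000000L
                   + (now.tv_usec - last.tv_usec);
        if (elapsed_us < budget_us)
            usleep(budget_us - elapsed_us);    /* delay the next frame grab */
    }
    gettimeofday(&last, (struct timezone *)0);
}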


Note that in the same ftp-site, a report ivs_report.ps.Z describes the
previous version.

Hope that you will appreciate this new version,


  Thierry Turletti.                       

-------------------------------------------------
RODEO Project.  INRIA Sophia-Antipolis - FRANCE -
e-mail: turletti@sophia.inria.fr

From rem-conf-request@es.net Wed Jan 27 09:47:00 1993
Date: Wed, 27 Jan 93 12:09:40 EST
From: Jack.L.DiGiuseppe@um.cc.umich.edu
To: rem-conf@es.net
X-Mts-Userid: W4AA
Subject: Request for Information
Content-Length: 1369
Status: RO
X-Lines: 29

We are working on plans for workstation videoconferencing R&D for a variety of
networking environments: in-building analog video over twisted pair, campus
FDDI backbone, switched digital using codecs, and internet packet video.
 
We are looking for specific vendors, vendor contacts, models, and costs for
equipment that might be used to configure workstations and build or
interface to networks in order to create the required R&D environment.
 
Components that we are looking for information and recommendations on
include (vendors I already have information for are in parentheses):
 
    Workstation video cameras
    Workstation microphones
    Workstation frame grabber boards for PC, Mac, Sun
    Workstation codecs for PC, Mac, Sun
    Coax-to-twisted pair analog video transmitters & receivers
      (Datapoint, Lightwave)
    Analog video+audio crosspoint switches (Datapoint, Lightwave)
    Crosspoint switch-to-switched digital codec interfaces
    Workstation FDDI interfaces for PC, Mac, Sun
 
We are in good shape in the area of ethernet and internet connectivity
products, and in the area of codecs, inverse muxes, and switched digital 
services.  But we would welcome suggestions on other hardware or software
that might be important and impact our plans.
 
Any assistance would be greatly appreciated.
 
Jack L. DiGiuseppe, jld@merit.edu, (313)665-0380

From rem-conf-request@es.net Wed Jan 27 12:56:54 1993
Date: Wed, 27 Jan 93 12:40:24 -0800
From: arc@xingping.esd.sgi.com (Andrew Cherenson)
To: ivs-users@jerry.inria.fr,
        "Thierry.Turletti" <Thierry.Turletti@sophia.inria.fr>
Subject: SGI patches for New version of IVS (2.1)
Cc: rem-conf@es.net
Content-Length: 2733
Status: RO
X-Lines: 93

The IVS 2.1 release has a bug in the SGI Indigo video code.
Here are the diffs needed to compile and run ivs and ivsd on the Indigo:

===================================================================
RCS file: RCS/ivs.c,v
retrieving revision 1.1
diff -c2 -r1.1 ivs.c
*** /tmp/,RCSt1a01785	Wed Jan 27 12:37:50 1993
--- ivs.c	Wed Jan 27 10:01:14 1993
***************
*** 956,960 ****
        fprintf(stderr, "cannot open login file\n");
      }else{
!       a_date = time();
        strcpy(date, ctime(&a_date)); 
        fprintf(F_log, "\nEnd of conference : %s\n", date);
--- 956,960 ----
        fprintf(stderr, "cannot open login file\n");
      }else{
!       a_date = time(0);
        strcpy(date, ctime(&a_date)); 
        fprintf(F_log, "\nEnd of conference : %s\n", date);
***************
*** 2409,2413 ****
        fprintf(stderr, "cannot create login file\n");
      }else{
!       a_date = time();
        strcpy(date, ctime(&a_date)); 
        fprintf(F_log, "\t\t *** IVS Participants list ***\n\n");
--- 2409,2413 ----
        fprintf(stderr, "cannot create login file\n");
      }else{
!       a_date = time(0);
        strcpy(date, ctime(&a_date)); 
        fprintf(F_log, "\t\t *** IVS Participants list ***\n\n");
===================================================================
RCS file: RCS/ivsd.c,v
retrieving revision 1.1
diff -c2 -r1.1 ivsd.c
*** /tmp/,RCSt1a01785	Wed Jan 27 12:37:51 1993
--- ivsd.c	Wed Jan 27 12:01:01 1993
***************
*** 49,52 ****
--- 49,55 ----
  #include <X11/Xaw/AsciiText.h>
  
+ #ifdef __sgi
+ #define vfork fork
+ #endif
  
  #include "BDaemon.bm"
===================================================================
RCS file: RCS/videosgi.c,v
retrieving revision 1.1
diff -c2 -r1.1 videosgi.c
*** /tmp/,RCSt1a01785	Wed Jan 27 12:37:51 1993
--- videosgi.c	Wed Jan 27 12:05:31 1993
***************
*** 134,139 ****
  	xoff = (w-col)/2;
  	yoff = (h-lig)/2;
! 	buf_even = &ptr[(yoff/2)*w + xoff];
! 	buf_odd = &ptr[(yoff/2 + h/2)*w + xoff];
  
  	for (i = 0; i < lig; i++) {
--- 134,139 ----
  	xoff = (w-col)/2;
  	yoff = (h-lig)/2;
! 	buf_odd = &ptr[(yoff/2)*w + xoff];
! 	buf_even = &ptr[(yoff/2 + h/2)*w + xoff];
  
  	for (i = 0; i < lig; i++) {
===================================================================
RCS file: sgi/RCS/Makefile,v
retrieving revision 1.1
diff -c2 -r1.1 sgi/Makefile
*** /tmp/,RCSt1a01798	Wed Jan 27 12:38:30 1993
--- sgi/Makefile	Wed Jan 27 09:59:47 1993
***************
*** 52,56 ****
  	cc -O -c $(SOURCES)video_coder.c
  
! sgi_grabbing.o: $(SOURCES)videosgi.c 
  	cc $(CFLAGS) -c $(SOURCES)videosgi.c
  
--- 52,56 ----
  	cc -O -c $(SOURCES)video_coder.c
  
! videosgi.o: $(SOURCES)videosgi.c 
  	cc $(CFLAGS) -c $(SOURCES)videosgi.c
  



From rem-conf-request@es.net Thu Jan 28 01:38:28 1993
To: arc@xingping.esd.sgi.com (Andrew Cherenson)
Cc: rem-conf@es.net
Subject: Re: SGI patches for New version of IVS (2.1)
Reply-To: Thierry.Turletti@sophia.inria.fr
Date: Thu, 28 Jan 93 09:32:47 +0100
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Status: RO
Content-Length: 76
X-Lines: 4


Thanks for your patches. I have included them in the ivs tar file.

  Thierry.

From rem-conf-request@es.net Thu Jan 28 10:25:30 1993
Date: Thu, 28 Jan 1993 10:19:05 -0500 (EST)
From: Jeff Young <young@alw.nih.gov>
To: rem-conf@es.net
Subject: CU-SeeMe
Status: RO
Content-Length: 164
X-Lines: 7


I've tried poking around to find this package (CU-SeeMe)
at Cornell but haven't had any luck, could anyone help
me out?
___________	
	jy
	young@heart.dcrt.nih.gov

From rem-conf-request@es.net Thu Jan 28 12:52:01 1993
From: Pushpendra Mohta <pushp@CERF.NET>
Subject: CERFnet Seminar: MBONE - the Multicast Backbone
To: mbone@isi.edu, rem-conf@es.net
Date: Thu, 28 Jan 93 12:25:23 PST
Cc: pushp@CERF.NET (Pushpendra Mohta)
X-Usmail: CERFnet, P.O. BOX 85608, San Diego, CA 92186-9784
X-Mailer: ELM [version 2.3 PL11]
Status: RO
Content-Length: 4971
X-Lines: 141

 CERFnet presents...
 
 Technology Update Seminar: 
 
 MBONE - the Multicast Backbone  
 	Videoconferencing Over the Internet       
 
 
 March 3, 1993
 9:00 a.m. to 4:00 p.m.
 San Diego Supercomputer Center
 San Diego, California
 
 
 Stay current with the latest Internet technologies - learn about the 
 MBONE - the multicast backbone now being used for experimental 
 videoconferencing over the Internet. This one day seminar, featuring 
 one of the architects of the MBONE, Steve Deering of Xerox PARC, will 
 tell you everything you need to know to understand what it is and 
 where it will lead. In addition, learn how you can become a part of 
 the MBONE project by joining CERFnet's MBONE Testbed.
 
 
 What is the MBONE?
 
 The MBONE is a virtual network and allows videoconferencing to the 
 desktop.  It is layered on top of portions of the physical Internet to 
 support routing of IP multicast packets. The MBONE is an outgrowth 
 of the first IETF "audiocast" experiments in which live audio and 
 video were multicast from the IETF meeting site to destinations 
 around the world. 
     
 
 Why is it important?
 
 The MBONE is the next step forward in the internetworking 
 environment. This is the leading edge of network engineering. It is 
 the basis for the applications of the near future. By participating in 
 the CERFnet MBONE Testbed you will be among the first network 
 sites to have multicast audio and video directly to your desktop. 
 
 Attend the seminar to learn more about the MBONE, and for 
 information on what you'll need  to participate in the CERFnet 
 MBONE Testbed.
 
 
 Agenda:
 
 *Steve Deering, a member of the research staff at Xerox PARC, will 
 discuss 
 
 What it is: multicasting, tunnelling, the virtual topology, etc.
 
 How it works: Protocols: DVMRP - the distance vector multicast routing
 protocol; MOSPF - the IP multicast extension to the OSPF routing protocol;
 hardware; configuration; and administration. 
 
 The results of previous experiences: IETF, ISOC broadcasts
 
 Applications: audio and video 
 
 
 *Pushpendra Mohta, Director of Engineering for CERFnet, will discuss
 
 CERFnet MBONE Testbed specifics: hardware, software, configuration, what
 you will need to participate
 
 
 Steve Deering:
 
 Stephen Deering received his B.Sc. and M.Sc.  from the University of 
 British Columbia, and his PhD from Stanford University. He has been 
 studying, designing and implementing computer communication 
 protocols since 1978, including work on X.25 software for connecting 
 to public networks, on the first implementation of the X.400 protocol 
 suite for e-mail, and on high-performance transport protocols for 
 distributed systems.  For his doctoral dissertation, he developed 
 several new routing protocols for efficient and scalable internetwork 
 multicasting.  Dr. Deering is currently a member of the research staff 
 at Xerox PARC, where he is continuing his work on multicast routing 
 and applications, and investigating new communication architectures 
 and services.  He is an active member of the Internet End-to-End 
 Research Group and of the Internet Engineering Task Force.
 
 Pushpendra Mohta
 
 Pushpendra Mohta is the Director of Engineering of the California
 Education and Research Federation Network (CERFnet) where he architects
 new network services and analyses routing and traffic flows. He has over
 three years experience in network engineering and operations and is well
 versed in current and emerging networking hardware and software
 technology. He has supported a large user base of varying familiarity with
 the Internet and taught seminars on network management. 
 Pushpendra holds a Masters Degree in Communication Systems from the
 University of California, San Diego.


 Registration information:

 "MBONE - the multicast backbone" is $79.00 for CERFnet members, and
 $169.00 for all others with registrations received by February 16,
 1993.  Registrations sent after that date should include $99.00 and
 $199.00, respectively. The price includes course materials, lunch, and
 refreshments.

 Checks should be made payable to General Atomics and sent to Barbara
 Wicklein-Massey at the address below. Or you can register by phone
 using MasterCard or VISA by calling (800) 876-CERF.

 To register by electronic mail send a message to Barbara Wicklein-
 Massey with the following information: name, institution/company,
 mailing address, e-mail address, date we can expect payment, and
 telephone number.

 Hotel information and directions will be sent with your registration.
 If you have further questions, please contact Barbara by e-mail or
 phone.

 Barbara Wicklein-Massey wicklein@cerf.net (619) 455-3903, or (800)
 876-CERF

 CERFnet, Bldg 9-207K P.O. Box 85608 San Diego, CA 92186-9784













--pushpendra

Pushpendra Mohta              pushp@cerf.net        +1 619 455 3908
Director of Engineering       pushp@sdsc.bitnet     +1 800 876 2373
CERFNet


From rem-conf-request@es.net Thu Jan 28 14:13:18 1993
To: Jeff Young <young@alw.nih.gov>
Cc: rem-conf@es.net
From: Scott_Brim@cornell.edu
Subject: Re: CU-SeeMe
Date: Thu, 28 Jan 1993 16:56:46 -0500
Sender: swb@nr-tech.cit.cornell.edu
Status: RO
Content-Length: 389
X-Lines: 15

gated.cornell.edu: pub/video.  Get everything, have fun.  ...Scott

  >Date: Thu, 28 Jan 1993 10:19:05 -0500 (EST)
  >From: Jeff Young <young@alw.nih.gov>
  >To: rem-conf@es.net
  >Subject: CU-SeeMe
  >
  >
  >I've tried poking around to find this package (CU-SeeMe)
  >at Cornell but haven't had any luck, could anyone help
  >me out?
  >___________	
  >	jy
  >	young@heart.dcrt.nih.gov


From rem-conf-request@es.net Wed Jan 13 15:29:37 1993
To: Stephen Casner <CASNER@ISI.EDU>, rem-conf@es.net
Subject: Re: home office IANA?
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Wed, 13 Jan 1993 18:17:28 +22311841
From: Valdis Kletnieks <valdis@black-ice.cc.vt.edu>
Content-Length: 1627
Status: RO
X-Lines: 38

On Mon, 11 Jan 1993 01:44:03 EST, you said:
> Jon,
>         The allocation of frequencies implies an allocation of
> bandwidth.  This is not the case for allocation of multicast
> addresses.  However, your point is well taken: there may be a role for
> administrative coordination of bandwidth for multicast conferences in
> the short term because we are beginning to see a number of special
> events as well as ongoing services that might want to send multicast
> information.  Since there is no mechanism deployed yet for traffic
> control (that's the long term), this coordination likely will depend
> upon the cooperation of the participants.

Steve:

A thought experiment: What if the next IETF plenary
and Radio Free Internet both grabbed 224.0.3.5, port 1091 to 
broadcast on?  The results would be similar to having 2 FM radio
stations both talking on 96.3 MHz - you then have to use "network
distance" (i.e. TTL) to separate them, similar to how the FCC in the
US (and corresponding bodies elsewhere) allocates radio stations.
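
(Setting that network distance is just a socket option on the sending side; a
minimal sketch, assuming the BSD multicast API, where a larger TTL lets the
packets travel further through the multicast tree:)

#include <sys/socket.h>
#include <netinet/in.h>

/* Limit how far our multicast packets propagate. */
int set_scope(int sock, unsigned char ttl)
{
    return setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL,
                      (char *)&ttl, sizeof(ttl));
}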

There's *two* issues here:

1) Reserving the *address* so people's bits don't stomp on one
another.

2) Reserving the *bandwidth* so all the bits can get through
(especially in those corners of the world where network technology
is dominated by things slower than T1/T3/FDDI).

Unfortunately, I don't see one process as solving both problems -
the 'bandwidth reservation' one is something that probably needs
to be solved for TCP/IP in general.  I'll let the heavyweights on
the IP V7 mailing list address that issue.. ;)

				Valdis Kletnieks
				Computer Systems Engineer
				Virginia Tech

From rem-conf-request@es.net Wed Jan 13 16:21:28 1993
From: bill@wizard.gsfc.nasa.gov (Bill Fink)
Subject: NASA Select TV STS-54 Audio Feed
To: rem-conf@es.net
Date: Wed, 13 Jan 1993 19:19:08 -0500 (EST)
X-Mailer: ELM [version 2.4 PL17]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 9538
Status: RO
X-Lines: 287

Hi,

We (the Network Support Group at Goddard Space Flight Center) are
audiocasting an audio feed from NASA Select TV which is actually
originated by Johnson Space Center.  We will try and be very careful
not to conflict with other scheduled activities on the MBONE such as
the SIP IETF Working Group Meeting.  However, if this activity interferes
in any significant way with the health of the MBONE or the Internet,
just send a message to mbone@gsfc.nasa.gov and we will cease and desist
immediately.

I have enclosed a copy of the current schedule of planned activities
for the current Space Shuttle mission that are to be broadcast over
NASA Select TV.

In the near future, we also hope to do a video feed.

						-Thanks

						-Bill Fink
						 NASA GSFC



    ***********************************************************************

                              NASA SELECT TV SCHEDULE
                                 STS-54 TDRS/IUS/DXS
                                       Rev A
                                  January 13, 1993

    ***********************************************************************

    NASA Select programming can be accessed through GE Satcom F2R,
    transponder 13.  The frequency is 3960 MHz with an orbital position
    of 72 degrees west longitude.  This is a full transponder service
    and will be operational 24 hours a day.

    Two hour edited programs of each flight day will be replayed for Hawaii
    and Alaska on Galaxy 6, transponder 19, channel 19.  The orbital
    position is 99 degrees west longitude, with a frequency of 4080 MHz.
    Audio is 6.2 and 6.8 MHz.  The programs will begin on launch day and
    continue through landing, airing at 11 pm Central time.

    This NASA Select Television schedule of mission coverage is available
    on Comstore, the mission TV schedule computer bulletin board service.
    Call 713-483-5817, and follow the prompts to access this service.




    ----------------------- Wednesday, January 13 -------------------------
                                     FD1


    ORBIT                SUBJECT              SITE       MET       CST


       3   NASA SELECT ORIGINATION            GDS     00/02:54    10:53 AM
          SWITCHED TO GOLDSTONE

       3   GROUNDSTATION VALIDATION CHECK/    GDS     00/02:54    10:53 AM
           PAYLOAD BAY VIEWS OF
           TDRS PRE-DEPLOY CHECKOUT
           T=16:00

       3   NASA SELECT ORIGINATION            JSC     00/03:10    11:09 AM
           SWITCHED TO JSC

       3   TDRS PRE-DEPLOY CHECKOUT           MIL     00/03:13    11:12 AM
           T=5:00

       5   TDRS/IUS DEPLOY                            00/06:13    02:12 PM
           (NOT TELEVISED LIVE)

       5   NASA SELECT ORIGINATION            KSC     00/06:46    02:45 PM
           SWITCHED TO KSC

       5   LAUNCH ENGINEERING REPLAYS         KSC     00/06:46    02:45 PM
           T=30:00

       6   Ku BAND ANTENNA DEPLOY                     00/07:10    03:09 PM
           (NOT TELEVISED)

       5   NASA SELECT ORIGINATION            JSC     00/07:16    03:15 PM
           SWITCHED TO JSC

       6   VTR PLAYBACK OF TDRS DEPLOY        TDRW    00/08:00    03:59 PM
           T=10:00

       8   CREW SLEEP                                 00/10:30    06:29 PM

       8   REPLAY OF FD1 ACTIVITIES           JSC     00/11:01    07:00 PM




    ----------------------- Thursday, January 14 --------------------------
                                       FD2

      13   CREW WAKEUP                                00/18:30    02:29 AM

      18   P/TV04 FLIGHT DECK ACTIVITIES      TDRW/E  01/01:35    09:34 AM
           T=40:00

      18   P/TV05 MIDDECK ACTIVITIES          TDRW/E  01/02:40    10:39 AM
           T=35:00

      21   MISSION STATUS BRIEFING            JSC     01/06:01    02:00 PM

      23   CREW SLEEP                                 01/09:30    05:29 PM

      26   REPLAY OF FD 2 ACTIVITIES          JSC     01/11:01    07:00 PM



    -------------------------- Friday, January 15 -------------------------
                                       FD3

      28   CREW WAKEUP                                01/17:30    01:29 AM

      31   P/TV05 MIDDECK ACTIVITIES          TDRE    01/21:35    05:34 AM
           T=16:00

      31   WOR RADIO INTERVIEW WITH CREW              01/22:15    06:14 AM
           T=20:00

      32   CONUS INTERVIEW WITH CREW          TDRW    01/23:40    07:39 AM
           T=20:00



      34   DSO 802 - PHYSICS OF TOYS          TDRW    02/02:53    10:52 AM
           AUDIO/VIDEO CHECKOUT
           T=15:00

      35   DSO 802 - PHYSICS OF TOYS          TDRE    02/03:23    11:22 AM
           EDUCATIONAL PROGRAM WITH           JSC
           BRONX, NEW YORK
           WILLOUGHBY, OHIO
           PORTLAND, OREGON
           FLINT, MICHIGAN
           T=40:00

      36   P/TV05 MIDDECK ACTIVITIES          TDRE    02/04:55    12:54 PM
           T=20:00

      37   MISSION STATUS BRIEFING            JSC     02/06:01    02:00 PM

      38   CREW SLEEP                                 02/08:30    04:29 PM

      40   REPLAY OF FD 3 ACTIVITIES          JSC     02/11:01    07:00 PM




    ----------------------- Saturday, January 16 --------------------------
                                      FD4

      43   CREW WAKEUP                                02/16:30    12:29 AM

      46   P/TV05 MIDDECK ACTIVITIES          TDRW    02/20:30    04:29 AM
           T=50:00

      52   MISSION STATUS BRIEFING            JSC     03/06:01    02:00 PM

      53   CREW SLEEP                                 03/07:30    03:29 PM

      56   REPLAY OF FD 4 ACTIVITIES          JSC     03/11:01    07:00 PM

      59   CREW WAKEUP                                03/15:30    11:29 PM



    ------------------------ Sunday, January 17 ---------------------------
                                      FD5


      60   EVA PREP                           TDRE    03/18:00    01:59 AM
           T=35:00

      61   EMU CHECK                          TDRW    03/19:02    03:01 AM
           T=20:00

      61   EMU PRE-BREATHE AND AIRLOCK        TDRW/E  03/19:30    03:29 AM
           DEPRESS  T=45:00

      62   AIRLOCK EGRESS                     TDRE    03/20:15    04:14 AM
           T=15:00

      62   EVA                                TDRW    03/20:40    04:39 AM
           T=15:00

      63   EVA                                TDRE    03/21:15    05:14 AM
           T=55:00

      64   EVA                                TDRW    03/22:10    06:09 AM
           T=12:00

      64   EVA                                TDRW/E  03/22:40    06:39 AM
           T=27:00

      64   EVA                                TDRE    03/23:15    07:14 AM
           T=30:00

      64   EVA                                TDRW    03/23:50    07:49 AM
           T=17:00

      65   EVA                                TDRW    04/00:15    08:14 AM
           T=30:00

      65   AIRLOCK INGRESS                    TDRE    04/00:45    08:44 AM
           T=10:00

      68   MISSION STATUS BRIEFING            JSC     04/06:01    02:00 PM

      69   CREW SLEEP                                 04/07:00    02:59 PM

      73   REPLAY OF FD 5 ACTIVITIES          JSC     04/11:01    07:00 PM

      74   CREW WAKEUP                                04/15:00    10:59 PM



    ----------------------- Monday, January 18 ----------------------------
                                     FD6

      77   P/TV08 CGBA ACTIVITIES             TDRE    04/18:20    02:19 AM
           T=23:00

      77   P/TV08 CGBA ACTIVITIES             TDRW    04/19:10    03:09 AM
           T=17:00

      83   Ku BAND STOW                               05/03:25    11:24 AM
           (NOT TELEVISED)

      84   MISSION STATUS BRIEFING            JSC     05/06:01    02:00 PM

      85   CREW SLEEP                                 05/07:00    02:59 PM

      88   REPLAY OF FD 6 ACTIVITIES          JSC     05/11:01    07:00 PM

      90   CREW WAKEUP                                05/15:00    10:59 PM


    ------------------------- Tuesday, January 19 -------------------------
                                     FD7

      96   DE-ORBIT BURN                              06/22:30    06:29 AM
           (NOT TELEVISED)

      97   LANDING AT KSC                     KSC     05/23:32    07:31 AM

           LANDING REPLAYS                    KSC                 07:44 AM

           POST LANDING PRESS CONFERENCE      KSC        TBD       TBD







    ***********************************************************************
                                Definition of Terms
    ***********************************************************************


    CGBA:  Commercial Generic Bioprocessing Apparatus
    CHROMEX: Chromosome and Plant Cell Division in Space Experiment
    CST:   Central Standard Time
    EVA:   Extra Vehicular Activity
    FD:    Flight Day
    IUS:   Inertial Upper Stage
    JSC:   Johnson Space Center
    KSC:   Kennedy Space Center
    Ku:    Ku Band Communications Antenna
    MECO:  Main Engine Cut Off
    MET:   Mission Elapsed Time: day/hour/minute
    PARE:  Physiological and Anatomical Rodent Experiment
    SSCE:  Solid Surface Combustion Experiment
    STS:   Shuttle Transportation System
    T=:    Total Time of TV Downlink
    TDRE:  Tracking And Data Relay Satellite, East Longitude
    TDRS:  Tracking And Data Relay Satellite Payload
    TDRW:  Tracking And Data Relay Satellite, West Longitude
    VTR:   Video Tape Recorder

From rem-conf-request@es.net Wed Jan 13 21:38:40 1993
Posted-Date: Wed 13 Jan 93 20:36:54-PST
Date: Wed 13 Jan 93 20:36:54-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: home office IANA?
To: valdis@black-ice.cc.vt.edu, CASNER@ISI.EDU, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@CASNER.ISI.EDU>
Status: RO
Content-Length: 521
X-Lines: 11

Valdis,
	You are right, there are two resources (addresses and bandwidth),
and different mechanisms are required for the two.  I don't see the
allocation of addresses as a problem in the short term -- there are
plenty to choose from, and random allocation at the current density
should give a suitably low probability of collision.  In the long term,
it may be feasible to have a hierarchical allocation mechanism for
addresses (some of which may fall out of a hierarchical addressing
structure?).
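
(As a back-of-the-envelope check, with assumed numbers purely for illustration:
the chance that any two of k randomly allocated conferences land on the same
address is the usual birthday-problem figure, which stays small while k*k is
small compared to the size of the address pool.)

#include <stdio.h>

/* Birthday-problem estimate: probability that any two of k randomly
 * allocated conferences pick the same address out of a pool of N. */
int main()
{
    double N = 16777216.0;      /* assumed pool size, for illustration only */
    double p_none = 1.0;        /* probability of no collision so far */
    int k;

    for (k = 1; k <= 1000; k++) {
        p_none *= 1.0 - (double)(k - 1) / N;
        if (k % 200 == 0)
            printf("k = %4d   P(collision) = %g\n", k, 1.0 - p_none);
    }
    return 0;
}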
							-- Steve
-------

From rem-conf-request@es.net Thu Jan 14 08:24:45 1993
Date: Thu, 14 Jan 93 11:16:47 -0500
From: Yee-Hsiang Chang <yhc@concert.net>
To: rem-conf@es.net
Subject: conference services
Cc: yhc@concert.net
Status: RO
Content-Length: 893
X-Lines: 30

Hi,

I am trying to compile a list of the conference services (from the 
user/application viewpoint) that should be supported by the functions in
the protocol layers.  I have come up with the following and would like 
comments from the list.

Basic services

o Point-to-point communication service.
o Multipoint communication service.

Other services

o Conference scheduling service.
o Conference announcement service.
o Information discovery service including conference, participant/site 
  names, and site address discovery.
o Security service.
o Conference cost/charging information service.

I feel most of the service definitions today focus on the basic services.
For example, what is the QoS of a point-to-point service or a multipoint 
service?  However, other services are also very important.

Thanks,

Yee-Hsiang Chang
Communications Research
MCNC Center for Communications

From rem-conf-request@es.net Thu Jan 14 10:08:51 1993
Date: Thu, 14 Jan 93 09:57:26 PST
From: vinay@eitech.com (Vinay Kumar)
To: rem-conf@es.net
Subject: Re: conference services
Content-Length: 1462
Status: RO
X-Lines: 49

I have two suggestions in the category of "other services":

1. Conference Retrieval and Archival Services.
2. Conference Browsing Services.

Vinay
Enterprise Integration Technologies
Palo Alto, CA 94301

---------------------------------------------------------------
> From rem-conf-request@es.net Thu Jan 14 08:50:23 1993
> Date: Thu, 14 Jan 93 11:16:47 -0500
> From: Yee-Hsiang Chang <yhc@concert.net>
> To: rem-conf@es.net
> Subject: conference services
> Cc: yhc@concert.net
> Content-Length: 893
> 
> Hi,
> 
> I am trying to compile a list of the conference services (from the 
> user/application viewpoint) that should be supported by the functions in
> the protocol layers.  I have come up with the following and would like 
> comments from the list.
> 
> Basic services
> 
> o Point-to-point communication service.
> o Multipoint communication service.
> 
> Other services
> 
> o Conference scheduling service.
> o Conference announcement service.
> o Information discovery service including conference, participant/site 
>   names, and site address discovery.
> o Security service.
> o Conference cost/charging information service.
> 
> I feel most of the service definitions today focus on the basic services.
> For example, what is the QoS of a point-to-point service or a multipoint 
> service?  However, other services are also very important.
> 
> Thanks,
> 
> Yee-Hsiang Chang
> Communications Research
> MCNC Center for Communications
> 

From rem-conf-request@es.net Thu Jan 14 12:45:54 1993
Date: Thu, 14 Jan 1993 21:39:48 +0100
To: rem-conf@es.net, Yee-Hsiang Chang <yhc@concert.net>
From: hans@sics.se (Hans Eriksson)
X-Sender: hans@sics.se (Unverified)
Subject: Re: conference services
Cc: yhc@concert.net
Content-Length: 943
Status: RO
X-Lines: 25

At 11.16 93-01-14 -0500, Yee-Hsiang Chang wrote:
>o Point-to-point communication service.
>o Multipoint communication service.

Maybe "multi-point" should be split into one-to-many and many-to-many, ie
in a conference everybody is "listening" to only one person, the current
speaker, (ont-to-many) or everybody listens/sees every other participant
(many-to-many).

>I feel most of the service definitions today focus on the basic services.
>For example, what is the QOS of a point-to-point service or a multipoint
>service?  However, the other services are also very important.

For me, these "higher" services do not feel mature enough to be defined. We
need to cover some ground and agree on basic concepts and terminology. But I
agree, we'd better get moving even on these -- which, to some extent, we
already are.

cheers

/hans

Hans Eriksson, SICS, Box 1263, 164 28 Kista, Sweden
Tel: +46 8 752 1527     Fax: +46 8 751 7230     email: hans@sics.se


From rem-conf-request@es.net Thu Jan 14 13:09:36 1993
Date: Thu, 14 Jan 1993 22:00:16 +0100
To: rem-conf@es.net, vinay@eitech.com (Vinay Kumar)
From: hans@sics.se (Hans Eriksson)
Subject: Re: conference services
Content-Length: 468
Status: RO
X-Lines: 15

At 09.57 93-01-14 -0800, Vinay Kumar wrote:
>I have two suggestions in the category of "other services":
>1. Conference Retrieval and Archival Services.
>2. Conference Browsing Services.

Could you elaborate on what you mean?  I take it you are talking about a
"recording" of a (real-time) conference, i.e. multimedia database stuff.

cheers

/hans

Hans Eriksson, SICS, Box 1263, 164 28 Kista, Sweden
Tel: +46 8 752 1527     Fax: +46 8 751 7230     email: hans@sics.se


From rem-conf-request@es.net Thu Jan 14 14:24:42 1993
Date: Thu, 14 Jan 93 13:57:24 PST
From: vinay@eitech.com (Vinay Kumar)
To: hans@sics.se
Subject: Re: conference services
Cc: rem-conf@es.net
Content-Length: 1676
Status: RO
X-Lines: 51

Hans:

These services can easily be termed database issues, but from a conferencing
scenario, as a user, I think these services are very relevant.  The reasons
are obvious.

By Conference Archival, I meant being able to save parts/snapshots of an
ongoing conference.  The snapshots could either be taken at intervals decided
before the conference begins or captured on the fly at the click of a button.
Both the temporal and spatial relations among the multimedia objects need to
be captured.

Once captured, the conference transcript should be browsable.  And once
retrieved, the transcript of the archived conference could be played back as
a slide presentation (say), with parallel or serial presentation of the
archived multimedia objects.
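
To make the idea concrete, here is a hedged sketch of what one archived
snapshot record might carry -- a capture time plus the spatial placement of
each multimedia object.  The field names are purely illustrative, not taken
from any existing tool:

/* Hypothetical record for one archived conference snapshot; names are
 * illustrative only. */
#include <time.h>

enum media_kind { MEDIA_AUDIO, MEDIA_VIDEO, MEDIA_IMAGE, MEDIA_TEXT };

struct media_object {
    enum media_kind kind;
    char source[64];            /* participant or site that produced it   */
    char file[256];             /* where the captured data was stored     */
    int  x, y, width, height;   /* spatial placement on the shared screen */
};

struct snapshot {
    time_t              taken_at;    /* temporal relation: capture time   */
    int                 n_objects;
    struct media_object objects[16]; /* spatial relation captured above   */
};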

Vinay Kumar
Enterprise Integration Technologies Corp.
459 Hamilton Avenue
Palo Alto, CA 94301

Ph.: 415 617 8014
Fax: 415 617 8019




From rem-conf-request@es.net Thu Jan 14 14:55:27 1993
From: Yee-Hsiang Chang <yhc@concert.net>
Subject: Re: conference services
To: hans@sics.se (Hans Eriksson)
Date: Thu, 14 Jan 93 17:33:57 EST
Cc: rem-conf@es.net
X-Mailer: ELM [version 2.3 PL11]
Content-Length: 1474
Status: RO
X-Lines: 32

> 
> >I feel most of the service definitions today focus on the basic services.
> >For example, what is the QOS of a point-to-point service or a multipoint
> >service?  However, the other services are also very important.
> 
> For me, these "higher" services do not feel mature enough to be defined. We
> need to cover some ground and agree on basic concepts and terminology. But I
> agree, we'd better get moving even on these -- which, to some extent, we
> already are.
>
> 
I felt the same way before about the *higher* services, but have a
different view now.  They are not actually that *high*.
I have spent some time figuring out how to provide the conference
scheduling service, and found that this service requires some support
from the lower protocol layers that doesn't exist today.  If people design
the lower network layers without considering all the possible services
required by applications, the design is not going to be good.

Conference scheduling requires support from multiple protocol layers.
It needs resource reservation to be possible ahead of time, which means
executing the reservation protocol in advance and setting up a list of
scheduling information at every associated network node.  It also
requires the connection setup (end system to network, and end system to
end system) to be executed ahead of time.
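
As a sketch only (the fields are hypothetical, not from any existing
reservation protocol), an advance reservation entry kept at each associated
network node might look something like this:

/* Hedged sketch of an advance (ahead-of-time) reservation entry.  Each
 * associated node would keep a list of these and commit the resources
 * when start_time arrives. */
#include <time.h>

struct advance_reservation {
    char     conference_id[32];  /* which scheduled conference this is for */
    time_t   start_time;         /* when the resources must be in place    */
    time_t   end_time;           /* when they may be released              */
    long     bandwidth_bps;      /* channel bandwidth requested            */
    unsigned n_participants;     /* expected number of connection setups   */
};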

I also feel that the same argument applies to the security service.

Regards,

Yee-Hsiang

From rem-conf-request@es.net Thu Jan 14 18:12:59 1993
Date: Thu, 14 Jan 1993 19:03:06 MST
From: Richard Schroeppel <rcs@cs.arizona.edu>
To: rem-conf@es.net
Subject: questions
Content-Length: 163
Status: RO
X-Lines: 5

A NASA group is sending an experimental audio feed over a network
called MBONE.  Is there a way for me to receive this feed?

Rich Schroeppel  rcs@cs.arizona.edu


From rem-conf-request@es.net Fri Jan 15 12:21:31 1993
From: Vesa Ruokonen <Vesa.Ruokonen@lut.fi>
Subject: Multicast with Solaris
To: rem-conf@es.net
Date: Fri, 15 Jan 93 22:19:14 EET
Reply-To: Vesa.Ruokonen@lut.fi
X-Mailer: ELM [version 2.3 PL11]
Content-Length: 626
Status: RO
X-Lines: 20

	Hello,

I have a Sun Solaris 2.1 workstation and I'd like to use it for multicast
experiments.  The conferencing software for SunOS (sd/nv/vat/..)
doesn't seem to work on Solaris as-is.
I haven't made any multicast modifications to the kernel, as I think it
was mentioned somewhere that Solaris comes with a multicast-capable
kernel.

Is there some switch to turn on in Solaris to enable multicast?
Or are there any patches available for Solaris?
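
One rough way to check (a generic sketch, not Solaris-specific advice) is
to try joining a multicast group and see whether the setsockopt() call is
accepted:

/* Generic probe: if IP_ADD_MEMBERSHIP is rejected, the kernel most
 * likely lacks IP multicast support. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ip_mreq mreq;

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr = inet_addr("224.2.0.1"); /* arbitrary test group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);

    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (char *) &mreq, sizeof(mreq)) < 0)
        perror("IP_ADD_MEMBERSHIP");   /* probably no multicast in the kernel */
    else
        printf("multicast join succeeded\n");

    close(fd);
    return 0;
}
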

Thanks for any help.
-- 
			    Vesa.Ruokonen@lut.fi
			Address: Punkkerikatu 7 B 27
			     53850 Lappeenranta
			          Finland

HELLO!  I'm a .signature virus! Join in and copy me into yours!

From rem-conf-request@es.net Fri Jan 15 14:52:15 1993
Posted-Date: Fri 15 Jan 93 14:43:43-PST
Date: Fri 15 Jan 93 14:43:43-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: questions
To: rcs@cs.arizona.edu, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@CASNER.ISI.EDU>
Content-Length: 733
Status: RO
X-Lines: 14

The MBONE is a virtual network.  It provides IP multicast delivery
using workstation-based multicast routers connected by "tunnels"
that are virtual network links over physical paths in the Internet.

For you to receive the NASA feed or other audio and video traffic
on the MBONE, you need IP multicast software (kernel additions)
on at least one machine at your site, and then a tunnel set up
between your site and a nearby node on the MBONE.  For many sites, the
network provider maintains one or more MBONE nodes and can then
provide tunnels to customers.  I don't think WESTNET is participating
yet; you should ask them about it.  You can get more information
in the file mbone/faq.txt on venera.isi.edu.
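
Once the kernel additions and a tunnel are in place, receiving a feed boils
down to joining the session's multicast group and reading packets.  The
sketch below is only an illustration, with placeholder group address and
port, not the NASA session's actual parameters:

/* Sketch only: join a group and read packets from it. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local;
    struct ip_mreq mreq;
    char buf[2048];

    memset(&local, 0, sizeof(local));
    local.sin_family      = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port        = htons(3456);                 /* placeholder port  */
    if (bind(fd, (struct sockaddr *) &local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    mreq.imr_multiaddr.s_addr = inet_addr("224.2.0.1");  /* placeholder group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, (char *) &mreq, sizeof(mreq));

    for (;;) {
        int n = recv(fd, buf, sizeof(buf), 0);           /* one audio packet */
        if (n > 0)
            printf("received %d bytes\n", n);
    }
}
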
							-- Steve
-------

From rem-conf-request@es.net Fri Jan 15 18:53:42 1993
From: bill@wizard.gsfc.nasa.gov (Bill Fink)
Subject: sd and vat questions
To: rem-conf@es.net
Date: Fri, 15 Jan 1993 21:52:15 -0500 (EST)
X-Mailer: ELM [version 2.4 PL17]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 1022
Status: RO
X-Lines: 25

I had two questions I was wondering if anyone had the answers to.

	1.  Our advertisement for the NASA Select Audio feed expired
	    in sd so I tried to create a new session with the same
	    parameters (multicast address, port number, conference ID)
	    as the expired session but it didn't work.  When I opened
	    the vat session, I was the only participant.  Is this a bug
	    or a feature or what?  Also, might it be related to
	    question number 2?

	2.  I did a 'ps wwax | grep vat' to find the command string
	    sd was using to invoke vat.  However, if I use this identical
	    command outside of sd, once again I don't see any other
	    participants.  Is this the way it's supposed to be?

	3.  I lied.  I just thought of a third question.  Does anyone
	    know what the purpose of the 'Push to talk' button in vat
	    is?  I couldn't find it documented in the man page.

By the way, the NASA Select Audio feed is back and listed in sd as
"New NASA Select Shuttle Audio".

						-Thanks

						-Bill

From rem-conf-request@es.net Sat Jan 16 02:26:21 1993
Date: Sat, 16 Jan 93 14:54:05 +1100
From: bob@cs.su.oz.au (Bob Kummerfeld)
To: rem-conf@es.net
Subject: Solaris 2.1
Status: RO
Content-Length: 446
X-Lines: 11

We just had a SPARC LX with a VideoPix board delivered to be used, in part,
for remote conferencing.  Unfortunately, the VideoPix software only runs
under SunOS 4.1, and Sun say they won't have a Solaris 2.1 version
available until at least March.

We haven't tried vat/nv etc yet since the shared libraries are different
but will do that soon. I anticipate that they won't work.

Does anyone know if it is possible to run SunOS 4.1.1 on an LX?

Bob

From rem-conf-request@es.net Sat Jan 16 08:04:34 1993
From: Fengmin Gong <gong@concert.net>
Subject: Re: conference services
To: hans@sics.se (Hans Eriksson)
Date: Sat, 16 Jan 93 10:46:55 EST
Cc: rem-conf@es.net, gong@concert.net (Fengmin Gong)
X-Mailer: ELM [version 2.3 PL11]
Status: RO
Content-Length: 1425
X-Lines: 31

> 
> At 11.16 93-01-14 -0500, Yee-Hsiang Chang wrote:
> >o Point-to-point communication service.
> >o Multipoint communication service.
> 
> Maybe "multi-point" should be split into one-to-many and many-to-many, ie
> in a conference everybody is "listening" to only one person, the current
> speaker, (ont-to-many) or everybody listens/sees every other participant
> (many-to-many).
> 

Thinking ahead to future reservation-based networks, it should be beneficial
to divide the multipoint conference service further:

(1) many-to-many, which fits a "brainstorming" kind of conferencing style.
    Everyone is allowed to talk freely (with due courtesy :-) and a constant
    video presence of all participants to all others may be necessary.
(2) one-to-many with a floating source, which fits a workshop-style
    presentation with questions.  In this case, the active source (the "one")
    floats in a controlled way (typically controlled by the current source).
(3) one-to-many with a fixed source, which corresponds to broadcast of an
    event and distribution of entertainment video.  A "static" one-to-many
    connection should be sufficient for this type of service.

Actually, these different services not only impact the underlying network
connection requirements, they also raise different requirements for audio
and video capability of end systems, e.g., audio mixing and handling of
multiple video inputs.
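
Purely as an illustrative summary (the names below are hypothetical, not
from any protocol), the split and its end-system implications could be
written down as:

/* Illustrative summary only. */
enum conf_mode {
    MANY_TO_MANY,          /* brainstorming: everyone sends and receives        */
    ONE_TO_MANY_FLOATING,  /* workshop: the active source changes under control */
    ONE_TO_MANY_FIXED      /* broadcast/entertainment: one static source        */
};

struct end_system_needs {
    enum conf_mode mode;
    int audio_mixing;      /* several audio streams may arrive at once     */
    int multiple_video;    /* several video windows may have to be handled */
};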

Fengmin Gong
MCNC Communications Research

From rem-conf-request@es.net Mon Jan 18 06:53:30 1993
Posted-Date: Mon, 18 Jan 93 14:35:40 +0100
Received-Date: Mon, 18 Jan 93 14:41:49 +0100
To: rem-conf@es.net
Cc: Thierry.Turletti@sophia.inria.fr
Subject: bug-fixed IVS2.0 version for Sun Parallax and HP VideoLive
Date: Mon, 18 Jan 93 14:35:40 +0100
From: Frank Ruge - Technical University of Berlin <frank@prz.tu-berlin.dbp.de>
Status: RO
Content-Length: 3316
X-Lines: 93


You may fetch the tar file from ftp.prz.tu-berlin.de under
/pub/audio_video/ivs2.0-hp-px.tar.Z

Frank 

==============================================================================

Sun Parallax version
____________________

IVS - Version 2.0 for Sun SPARCstation with PARALLAX XVideo under SunOS 4.1.x
18.01.93

Video:

The Makefile for the PARALLAX version of IVS is located in the directory
./sun4OS4parallax.
To compile IVS 2.0 with PARALLAX XVideo support you should define -DSUN_PX_VIDEO
and, for performance reasons, -DLOW_LEVEL, which activates the mmap access to
the PARALLAX XVideo hardware.

The video process will start its own video window, either in CIF or QCIF size.
The button "local video" has no meaning here, because one cannot grab
any video frame from the PARALLAX hardware without displaying it.

The grabbing performance in QCIF size is better than 10 frames/sec, but the
encoding and decoding time still depends on the processor performance :-<.
We weren't able to run this software on a SPARCstation 10 with PARALLAX
hardware, but it should be much faster there.

Edgar Ostrowski 
Frank Ruge
{edgar|frank}@prz.tu-berlin.dbp.de

HP version
__________

IVS - Version 2.0 for HP9000/7xx machines under HP-UX 8.0x

The current version supports the network-transparent audio server architecture
from Hewlett Packard.  In order to run IVS audio you should have an HP9000/7xx
machine with built-in audio or an audio server in your network.  The video
section of IVS currently supports only the HP VideoLive board with the
corresponding video display software (grabber).

General:

The Makefile for the HP-UX version is located in ./hps700. Please define
-DHPUX to compile it.

You can only use unicast connections, because HP-UX doesn't support any
multicast features yet.

Video:

To compile IVS 2.0 with VideoLive support you should define -DVIDEO_HP_ROPS.
Everything else should be no problem.

In order to run IVS 2.0 with video you should start the software included in
the HP VideoLive product (/usr/local/rops/bin/grabber) and adjust the
parameters corresponding to the connected camera (e.g. timing, PAL or NTSC).
Afterwards you should simply start ivs.  The video performance below was
measured between two HP9000/750s.

		CIF		QCIF	(times in ms)
grabbing:	200		200
compression:	140-190		40-70
decompression:	600		300

Further work on this version will be to start a display window via the
XV protocol, which will hopefully be finished before Thierry's version 2.1.

Audio:

To compile IVS 2.0 with hp audio server support you should define -DHPUX. 

If you start ivs on a machine which has no built-in HP audio but you have an
audio server on the net, you can set the environment variable AUDIO to
'audio-server-name':0 or use the option '-as server name'.
If you have external speakers, set the environment variable SPEAKER to 'e' or 'E'.

Notice: If you don't have an HP audio server in your net, you have to define -DNOHPAUDIO
        in order to get ivs up and running. Due to a bug in the hp-audio library 
        the audio calls in 8.0x cannot check if there is audio support or not. 

Bugs: If the audioServer is busy, ivs will hang and wait for a connection.
      To avoid this, start a new audioServer-process (/usr/audio/bin/Aserver)
      and restart ivs.

 



From rem-conf-request@es.net Wed Jan 20 09:44:50 1993
From: sylvia@dcs.qmw.ac.uk
Subject: Re: conference services
To: rem-conf@es.net
Date: Wed, 20 Jan 1993 17:42:21 +0000 (GMT)
X-Mailer: ELM [version 2.4 PL13]
Content-Type: text
Content-Length: 1160
Status: RO
X-Lines: 19

I've been wondering why it's only conferencing that people think about in 
terms of support functions in the protocol layers.  There are likely to be 
other applications of multimedia that may need a different kind of support 
from that required by conferencing.  Some of the applications we've been 
developing at QMW for a local multimedia (analogue) network use video for
non-conferencing modes of collaboration, e.g. video 'bursts' (3-second video
connections, either one-way or two-way alternately), and one-to-one open
channels.  Future applications are planned that will have other
characteristics, e.g. they may use periodic short-duration video connections.

By contrast, conferencing seems to assume that video channels will most likely 
be long-lived, synchronised with audio, and create a fairly constant demand for 
channel bandwidth.  Conferencing can also stand a fairly long set-up time, 
which a brief video stream couldn't. I think there's a danger that by 
concentrating solely on the traditional conferencing-type applications, future 
novel uses of video will not get the services they need. 
I'd welcome comments on this.

Sylvia Wilbur  

From rem-conf-request@es.net Wed Jan 20 10:08:31 1993
From: owens@cookiemonster.cc.buffalo.edu (Bill Owens)
Subject: Office camera possibility
To: rem-conf@es.net
Date: Wed, 20 Jan 1993 12:50:18 -0500 (EST)
X-Mailer: ELM [version 2.4 PL5]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Length: 1248
Status: RO
X-Lines: 28

Glancing through the latest issue (1/93) of 73 Magazine, I found a
one-page column on ATV (amateur television) discussing an interesting
little camera.  It is described as being 'somewhat smaller than a pack
of cigarettes, has an automatic electronic shutter and includes a
built-in microphone'.  It's also reasonably inexpensive: $204.50
quantity 1, and extremely sensitive, only 0.02 lux.

In addition, the reviewer (Bill Brown, WB2ELK) describes using a bank
of four Radio Slack IR LEDs to illuminate an otherwise dark room, since
the CCD element is sensitive into the near IR. Sounds perfect for those
of us who like subdued lighting in the office.

No, I won't be ordering one, because the boss says I have to do all
these sorts of things on my own time and funds (anybody want to hire an
enthusiastic young network engineer? ;) But here's the info:

GBC CCD-200 camera
CCTV Corporation
315 Hudson St.
New York, NY 10013
800/221-2240

If anybody does get one, I'd like to know how well they work...

Bill.
Bill Owens                                              owens@acsu.buffalo.edu
104E Computing Center                             uunet!acsu.buffalo.edu!owens
Buffalo, NY 12460                                                 716/645-3511

From rem-conf-request@es.net Wed Jan 20 11:04:31 1993
Date: Wed, 20 Jan 93 13:55:14 EST
From: oj@roadrunner.pictel.com (Oliver Jones)
To: owens@cookiemonster.cc.buffalo.edu
Cc: rem-conf@es.net
Subject: Re: Office camera possibility
Reply-To: oj@pictel.com
Organization: PictureTel Corporation
Phone: +1 508 977 8396
Fax: +1 508 532 6893
Status: RO
Content-Length: 381
X-Lines: 13

Bill Owens described this camera:

   GBC CCD-200 camera
   CCTV Corporation


We've got a few of these.  These beasts are *NOISY*, which confounds
predictive compression a bit.

Ollie Jones                  PictureTel Engineering
email:  oj@world.std.com     222 Rosewood Drive
tel:   +1(508)977-8396       Danvers, MA 01923-1393
video: (+1)700-561-9938&9939 fax: +1(508)532-6893

From rem-conf-request@es.net Sat Jan  2 17:35:53 1993
To: rem-conf@es.net
Cc: feiner@cs.columbia.edu, klemets@sics.se
Subject: Experiences with the MultiG workshop
Date: Sun, 03 Jan 93 02:01:26 +0100
From: klemets@sics.se
Status: RO
Content-Length: 4808
X-Lines: 99

I would like to share with you some experiences we made when
transmitting the 5th MultiG workshop using nv and vat.

The workshop took place on the 18th of December and had not been
announced on this list so there were not all that many remote
participants. 

The last presentation at the workshop was performed by Steve Feiner
remotely from Columbia.  This is possibly the first time a scheduled
presentation at a workshop has been given remotely using the Internet.

Since they didn't have any VideoPix cards at Columbia, it was not
possible to use nv to display slides.  Steve's participation would have
to be by vat only.  He would still be able to display slides, however,
using other widely available software.

The idea was as follows:  At the auditorium we had a Sparcstation.  It
was used to transmit and receive audio with vat, and video with nv.
The video signal to the monitor was diverted to a projector that
would display the image on an overhead screen.  This would allow
everyone in the auditorium to see what was on the screen.

During his presentation, Steve would log in to this machine and
display his ps-slides with the ghostscript program and a couple of GIF
images using xgif.

We anticipated the problem that Steve would not get the feedback one
would normally get when giving a talk.  In particular, he would not
know when the slides were really visible on the screen.

But we would point a camera at the projection of the Sun screen and
transmit this image using nv.  He would then be able to watch the screen
and get the necessary feedback.

When we had decided to use ghostscript, we figured that the graphical
interface and dialogue boxes would be difficult to control remotely.
So we decided that I would control the postscript previewer locally on
his command instead.

Unfortunately, we didn't have the time to do a dry run, and Steve had
hardly ever used sd, vat or nv before.  With both of us under stress
due to the lack of time, he didn't succeed in receiving the image of
the Sun screen with nv, and thus didn't get much feedback.  He said it
felt like "talking to the void."  We could however keep contact using
the UNIX "write" command since he was logged in on our computer.  He
would ask things like  "Is slide 8 really up?" and I would type "Yes"
for answer.

Much of this could have been facilitated by some sort of distributed
whiteboard program where changes only take effect at the sender when
the changes have become visible at the receiver.
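
A hedged sketch of that idea (the transport hooks below are placeholders,
not an existing program): the sender commits a slide change locally only
after every receiver has acknowledged seeing it.

#include <stdio.h>

#define N_RECEIVERS 3

static void send_update_to_all(int slide) { (void) slide; /* network send */ }
static int  wait_for_ack(int receiver)    { (void) receiver; return 1; /* blocks */ }
static void display_locally(int slide)    { printf("slide %d now shown locally\n", slide); }

static void show_slide(int slide)
{
    int r, acked = 0;

    send_update_to_all(slide);
    for (r = 0; r < N_RECEIVERS; r++)
        acked += wait_for_ack(r);      /* wait until receiver r confirms */

    if (acked == N_RECEIVERS)
        display_locally(slide);        /* sender sees it only after everyone else */
}

int main(void)
{
    show_slide(8);                     /* "Is slide 8 really up?" answers itself */
    return 0;
}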

However, the color GIF images that Steve displayed were on the order
of 600 kbytes each.  It was really necessary to transfer them to the
machine at the auditorium prior to the talk.  Transferring them in real
time might not be feasible with the current bandwidths available.

In all, the presentation worked out fine, although the "long distance
phone call" quality of the audio, and 0.5-1.0 second dropouts at
regular intervals annoyed some people in the audience.

If we could refine this technique we would save lots of money whenever
we give a conference by having all of the invited lecturers participate
remotely, instead of buying them a plane ticket. :-)
The time differences between Sweden and the USA will however
complicate scheduling of the remote talks since all of the 
participants are going to want to give their presentations late in the
afternoon, Swedish time...

During the MultiG workshop we also used a video mixer to mix the image
of the speaker with the image from a document camera.  The speaker
would hand copies of his slides on white paper to the person handling
the document camera.  This way we could transmit an image of both the
speaker and his slides using only one VideoPix card.  

The use of a document camera worked out fine, but the speakers were
using normal transparencies to display their slides with an overhead
projector.  So anything that they wrote on their transparencies
during the talk would not be visible in the transmitted image.
Ideally, the speaker would use the document camera himself and the
image from the camera should be displayed on a screen in the
auditorium as well as being fed into the video mixer.

The best setting on the video mixer appeared to be to split the image
vertically in two roughly equally large halves.  The speaker would be
in one half and the slide in the other.

Once we got the focus and lighting right on the document camera, the
slides would actually be readable with nv.

The video mixer has a "hold" feature which we used to take a snapshot
of the current slide.  The next slide could then be prepared while the
first slide was still being shown.  A strange observation was that
while nv was being fed a snapshot of the slide, i.e. a still picture,
it would still change the gray levels as if the image had somehow
changed.

Anders


From rem-conf-request@es.net Sat Jan  2 18:22:24 1993
Date: Sat, 2 Jan 93 21:17:25 EST
From: feiner@ground.cs.columbia.edu (Steven Feiner)
To: klemets@sics.se, rem-conf@es.net
Subject: Re: Experiences with the MultiG workshop
Cc: feiner@cs.columbia.edu
Status: RO
Content-Length: 1937
X-Lines: 33

Having undergone my trial by fire without a dry run :-), I'd like to second
many of Anders' observations and add a few of my own.

Part of my problem of "talking to the void" was that the headphone cable
was malfunctioning, so I was forced to switch between the headphones and
the speaker. While using the speaker is supposed to turn off audio feedback
while the mike is on (which I hadn't been told at the time, but should have
guessed), I was still getting some audio feedback, but it was extremely
sparse.  So most of the time, I wasn't hearing or seeing anything in
MultiG-land! 

Anders' comment about the need for transferring stuff in advance was
especially true here.  The night before, I had ftp'ed over the GIF and
PostScript images, and had, in fact, checked out whether the remote version
of ghostview was working by running it there and displaying over the net to my
workstation. Even the text slides took an unusably long time to draw this way,
due, in part, I think, to the fact that my relatively spartan
color text slides (made with PowerPoint) had some fairly hefty PostScript
hidden behind their simple appearance.  My guess is that even locally they also
took longer to draw than desirable, and were hardly as fast as switching
real 35mm slides, which tends to throw off any speaker's timing.
Perhaps some of the money saved on speaker airfare could be spent on plenty
of writable disk space to render slides before the talk begins, so that
they can be displayed as pixmaps?  Or we could use slide-show
software that did look-ahead rasterization of the next n slides while
the current slide was being discussed, and then copied the next fully
rasterized slide to the display when the speaker was ready for it.
(Unfortunately, ghostview doesn't have this capability.)
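
A minimal sketch of that look-ahead idea (the render/display/wait calls are
placeholders, not a real PostScript rasterizer or X calls): while slide i is
shown, slide i+1 is rendered into an off-screen pixmap, so advancing is just
a copy.

#include <stdio.h>

#define N_SLIDES 20

static void render_to_pixmap(int slide) { printf("rasterizing slide %d in background\n", slide); }
static void copy_to_display(int slide)  { printf("displaying pre-rendered slide %d\n", slide); }
static void wait_for_speaker(void)      { /* e.g. wait for a key press */ }

int main(void)
{
    int i;

    render_to_pixmap(0);                 /* prepare the first slide up front */
    for (i = 0; i < N_SLIDES; i++) {
        copy_to_display(i);              /* fast: just a pixmap copy */
        if (i + 1 < N_SLIDES)
            render_to_pixmap(i + 1);     /* look-ahead during the discussion */
        wait_for_speaker();
    }
    return 0;
}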

Of course, one problem with having invited speakers participate remotely
is that they're not able to go out to have drinks together afterwards!

Steve

From rem-conf-request@es.net Wed Jan  6 15:10:02 1993
Date: Wed, 6 Jan 93 15:00:30 PST
From: ari@es.net (Ari Ollikainen)
To: rem-conf@es.net
Subject: IVS with Parallax XVideo Card?
Content-Length: 484
Status: RO
X-Lines: 11


Has anyone contemplated using the Parallax XVideo card as frame grabber 
for IVS ?


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ari Ollikainen    ari@es.net     National Energy Research Supercomputer Center
ESnet (Energy Sciences Network)   Lawrence Livermore National Laboratory       
510-423-5962  FAX:510-423-8744   P.O. BOX 5509, MS L-561, Livermore, CA 94550  
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


From rem-conf-request@es.net Thu Jan  7 00:26:14 1993
Date: Thu, 7 Jan 1993 09:21:33 +0100
To: rem-conf@es.net, ari@es.net (Ari Ollikainen)
From: hans@sics.se (Hans Eriksson)
Subject: Re: IVS with Parallax XVideo Card?
Content-Length: 590
Status: RO
X-Lines: 16

At 15.00 93-01-06 -0800, Ari Ollikainen wrote:
>Has anyone contemplated using the Parallax XVideo card as frame grabber 
>for IVS ?

contemplated, yes, done anything, not yet (sigh)

We will, however, in the coming week give the XVideo card a shot as just a
replacement for VideoPix as a faster framegrabber -- first in my own
videoconference implementation, then probably in nv and ivs.  There is nothing
spectacular about it; I do not see any problems, just some work.

/hans

Hans Eriksson, SICS, Box 1263, 164 28 Kista, Sweden
Tel: +46 8 752 1527     Fax: +46 8 751 7230     email: hans@sics.se


From rem-conf-request@es.net Thu Jan  7 02:06:35 1993
To: ari@es.net
Cc: frank@prz.tu-berlin.dbp.de, delamot@wagner.inria.fr, rem-conf@es.net
Subject: Re: IVS with Parallax XVideo Card?
Date: Thu, 07 Jan 93 11:08:00 +0100
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Content-Length: 511
Status: RO
X-Lines: 15



> Has anyone contemplated using the Parallax XVideo card as frame grabber 
> for IVS ?

Frank Ruge <frank@prz.tu-berlin.dbp.de> has ported IVS 2.0 to the Sun
Parallax framegrabber and the HP VideoLive board, as well as to the HP audio
server.  He placed the sources on marvin.prz.tu-berlin.de:/dist/ivs-px.tar.Z.
I cannot test them because I don't have a Parallax framegrabber or HP
stations, but I hope to integrate them in the next release (with Athena
Widgets) of IVS.

Cheers,

  Thierry turletti.

From rem-conf-request@es.net Fri Jan  8 10:57:04 1993
Date: Fri, 8 Jan 93 13:45:48 EST
From: klemets@ground.cs.columbia.edu (Anders Klemets)
To: rem-conf@es.net
Cc: kink@is.rice.edu
Subject: vat and nv recording tools
Content-Length: 279
Status: RO
X-Lines: 7

It has come to my attention that the tar file with the vat and nv
recording programs is missing the source for av_play.  Sorry about that.

I have now put a new version of archive/vat_nv_record.tar.Z that includes
the file av_play.c in the anonymous ftp area at sics.se.

Anders

From rem-conf-request@es.net Sun Jan 10 07:59:16 1993
To: rem-conf@es.net
Subject: home office IANA?
Date: Sun, 10 Jan 93 15:46:04 +0000
From: Jon Crowcroft <J.Crowcroft@cs.ucl.ac.uk>
Status: RO
Content-Length: 539
X-Lines: 16


in the uk, frequencies for radio stations are tightly allocated as a
scarce resource by the home office - although this has led to the
existence of pirate radio (and tv) stations, and to a scarcity of
legal stations, it seems an analogous situation to that faced by the
internet:

is there scope for this role to be taken on by ISOC and on down to
IANA for multicast address allocation for conferences...?

(in the long run, i hope such a role would fade away, along with most
forms of government, but...)

cheers
& happy new year.
jon

From rem-conf-request@es.net Sun Jan 10 19:38:56 1993
To: feiner@cs.columbia.edu, klemets@sics.se, rem-conf@es.net
Cc: mankin@cs.wisc.edu, morgan@cs.wisc.edu
Subject: Remote Speaker (Re: Experiences with the MultiG workshop)
Date: Sun, 10 Jan 93 21:00:44 CST
From: mankin@cs.wisc.edu
Content-Length: 507
Status: RO
X-Lines: 18

Hi:

Thanks for your comments on experience with
a remote speaker/as a remote speaker using internet
multimedia.  In several smaller-scale experiences, 
we've started to conclude that it's rather
uncomfortable to be the remote speaker, and that
we need enhancements of the tools to compensate for
the speaker's lack of presence when most other
participants are together.  Would other folks who have
given talks over the net be willing to share their
views on this?

Thanks,

Allison / mankin@cs.wisc.edu


From rem-conf-request@es.net Mon Jan 11 02:10:57 1993
Posted-Date: Mon 11 Jan 93 01:44:03-PST
Date: Mon 11 Jan 93 01:44:03-PST
From: Stephen Casner <CASNER@ISI.EDU>
Subject: Re: home office IANA?
To: J.Crowcroft@cs.ucl.ac.uk, rem-conf@es.net
Mail-System-Version: <SUN-MM(219)+TOPSLIB(128)@CASNER.ISI.EDU>
Content-Length: 608
Status: RO
X-Lines: 12

Jon,
        The allocation of frequencies implies an allocation of
bandwidth.  This is not the case for allocation of multicast
addresses.  However, your point is well taken: there may be a role for
administrative coordination of bandwidth for multicast conferences in
the short term because we are beginning to see a number of special
events as well as ongoing services that might want to send multicast
information.  Since there is no mechanism deployed yet for traffic
control (that's the long term), this coordination likely will depend
upon the cooperation of the participants.
							-- Steve
-------

From rem-conf-request@es.net Mon Jan 11 14:32:05 1993
To: mankin@cs.wisc.edu
Cc: rem-conf@es.net
Subject: Re: Remote Speaker (Re: Experiences with the MultiG workshop)
From: Keith Lantz <lantz@vicor.com>
Date: Mon, 11 Jan 93 11:01:17 -0800
Sender: lantz@vicor.com
Content-Length: 794
Status: RO
X-Lines: 17

>> In several smaller-scale experiences, 
>> we've started to conclude that it's rather
>> uncomfortable to be the remote speaker, and that
>> we need enhancements of the tools to compensate for
>> the speaker's lack of presence when most other
>> participants are together.  

This, of course, is one of the fundamental lessons from the distance
learning community. In my own experience with the Stanford Instructional
Television Network, for example, it was imperative (for a "good" lecture)
to have at least a few listeners/students present in the same room with me.
Anyway, anyone who wants to get serious about giving presentations of this
sort really should tap into the distance learning community. University
organizations such as SITN are often a good place to start.

Cheers, Keith


From rem-conf-request@es.net Tue Jan 12 06:32:35 1993
To: rem-conf@es.net
Cc: Brian.Randell@newcastle.ac.uk, cbergeron@bcr5.uwaterloo.ca,
        dcrocker@mordor.stanford.edu, esprit_co@cica.fr, herve@prl.dec.com,
        obaala@nuri.inria.fr, renaud@orstom.fr, tds@hocus.att.com,
        Swwang@csie.nctu.edu.tw, suda@ics.uci.edu, geir.pedersen@usit.uio.no,
        hoffman@eng.sun.com, bansal@ccrl.nj.nec.com, daniele_pagani@theseus.fr,
        Cormac.Sreenan@cl.cam.ac.uk, josef@iceberg.mpce.mq.edu.au,
        heras@taf.fundesco.es, AHUSAIN@UMAB.BITNET, clfung@uxmail.ust.hkq,
        kfall@cs.UCSD.EDU
Subject: IVS videoconferencing report now available.
Date: Tue, 12 Jan 93 15:20:34 +0100
From: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Content-Length: 1252
Status: RO
X-Lines: 35



You can retrieve the INRIA report by anonymous ftp on:

  avahi.inria.fr(138.96.24.30):pub/videoconference/ivs_report.ps (Postscript)

Abstract:

This report describes a low-bandwidth videoconferencing application 
on the Internet using the IP multicast extensions and
the User Datagram Protocol (UDP) transport protocol. The video
coder-decoder is a software implementation of the CCITT
recommendation H.261, originally developed for the Integrated
Services Digital Network (ISDN). Until now, H.261 codecs have been
implemented in hardware. We find that the mean output rate of the
coder is less than 30 kb/s, thus making videoconferencing
applications possible over low-speed networks such as the Internet.

After a brief overview of the different data compression
techniques and a description of the recommendation H.261, we
describe in more detail IVS, our videoconferencing application,
which is freely available in the public domain.



Comments are welcome !!


  Thierry.

-----------------------------------------------------------------------------
Thierry Turletti                             e-mail: turletti@sophia.inria.fr

INRIA Sophia Antipolis FRANCE -- Project RODEO
2004 route des Lucioles, BP 93, 06902 Sophia Antipolis -- FRANCE

From rem-conf-request@es.net Tue Jan 12 10:19:46 1993
Date: Tue, 12 Jan 93 13:06:14 EST
From: Curtis Villamizar <curtis@ans.net>
To: Thierry TURLETTI <Thierry.Turletti@sophia.inria.fr>
Cc: rem-conf@es.net, Brian.Randell@newcastle.ac.uk,
        cbergeron@bcr5.uwaterloo.ca, dcrocker@mordor.stanford.edu,
        esprit_co@cica.fr, herve@prl.dec.com, obaala@nuri.inria.fr,
        renaud@orstom.fr, tds@hocus.att.com, Swwang@csie.nctu.edu.tw,
        suda@ics.uci.edu, geir.pedersen@usit.uio.no, hoffman@eng.sun.com,
        bansal@ccrl.nj.nec.com, daniele_pagani@theseus.fr,
        Cormac.Sreenan@cl.cam.ac.uk, josef@iceberg.mpce.mq.edu.au,
        heras@taf.fundesco.es, AHUSAIN@UMAB.BITNET, clfung@uxmail.ust.hkq,
        kfall@cs.UCSD.EDU
Subject: Re: IVS videoconferencing report now available.
Content-Length: 690
Status: RO
X-Lines: 21


> You can retrieve the INRIA report by anonymous ftp on:
> 
>   avahi.inria.fr(138.96.24.30):pub/videoconference/ivs_report.ps (Postscript)
> 
> Abstract:

> ...                      We find that the mean output rate of the
> coder is less than 30 kb/s, thus making videoconferencing
> applications possible over low-speed networks such as the Internet.

Maybe the Internet is low speed in France, but elsewhere it is not a
low speed network, particularly since you are implying that the
Internet is low speed relative to ISDN.  Perhaps "possible over low
speed attachments to the Internet" would be more accurate.

> Comments are welcome !!

Just a minor comment on the abstract.

Curtis

From rem-conf-request@es.net Tue Jan 12 13:45:28 1993
Date: Tue, 12 Jan 1993 13:04:06 PST
Sender: Ron Frederick <frederic@parc.xerox.com>
From: Ron Frederick <frederic@parc.xerox.com>
To: rem-conf@es.net
Subject: New version of nv (2.3)
Content-Length: 2044
Status: RO
X-Lines: 39

Hello everyone...

I have placed a new version of 'nv' up on parcftp.xerox.com in the file
'/pub/net-research/nv.tar.Z'. This version uses the same on-the-wire
format as the previous 'nv', but it moves the brightness & contrast
controls, as well as the frame rate & bandwidth status information, over
to the receiver windows.

To bring up this new control panel, simply click any mouse button in an
incoming video window and it will pop down. Click again to make it
disappear. For now, I have removed the controls from the sender side,
as the data from the VideoPix card seems to pretty well fill the encoding
space, and you only lose information by doing any mapping there. It is
also somewhat confusing from a UI perspective to have it in both places.
If someone has a compelling reason for it, though, I'm willing to rethink
that decision.

Anders Klemets noticed one side effect of this change in where the
mapping is done -- people playing back nv 2.1 video streams, or receiving
video from nv 2.1 users, need to set both their brightness & contrast values
to 50 in order to prevent any adjustments from being made (since they
were already made at the sender). You're free to make _additional_
adjustments at the receiver, of course, but the default values of 60/60
generally make the picture look too bright in that case.
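
Purely for illustration -- this is not nv's actual code -- a receiver-side
mapping in which brightness=50, contrast=50 is the identity (and therefore
leaves incoming pixel values untouched) could look like:

/* Illustrative only: 50/50 maps each pixel to itself. */
static unsigned char map_pixel(unsigned char in, int brightness, int contrast)
{
    double gain   = contrast / 50.0;                    /* 50 -> gain 1.0   */
    double offset = (brightness - 50) * 255.0 / 100.0;  /* 50 -> offset 0   */
    double out    = (in - 128.0) * gain + 128.0 + offset;

    if (out < 0.0)   out = 0.0;
    if (out > 255.0) out = 255.0;
    return (unsigned char) out;
}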

For those of you planning on watching the SIP broadcast, we'll be using
nv 2.3, so it would be a good idea to pick it up.

I still owe this list some more info about the nv encoding algorithm. I
did get a few requests for it, and it's on my list of things to write up. Also,
it looks like I'll be able to release nv sources real soon -- we've gotten some
of the internal paperwork back, and I'm hoping to have cleaned up copies
of the source for release in a week or two... It'll be released with only the
restriction that it be used for research and evaluation purposes -- if you
have any interest in using the code commercially, you'll have to make
special arrangements.
--
Ron Frederick
frederick@parc.xerox.com

