Software Requirements
The MICE project is not primarily funded for software development: its
role is to integrate the best available tools, and to develop new ones only
when no such tools are available. We aim to recommend only tools that are
available on a wide range of platforms and do not require excessive
additional expense.
The tools we recommend are mostly based on IP multicast: see the
Mbone FAQ
for more details. This is currently the only way to support large group
conferences efficiently. However, all the tools will also run in a "two-way"
or so-called "unicast" mode if required.
All the following software is public domain. However, the source code
is not always freely available.
Video Tools
There are two video tools currently available: IVS and nv. IVS implements
the H.261 standard in software, and is the tool we recommend, as it
interworks with the hardware H.261 implementations also used by MICE. nv
implements a proprietary compression scheme and the Cell-B algorithm, and
will achieve slightly better quality video at the expense of a higher data
rate. See below for information on nv; here we discuss IVS only.
IVS: the INRIA Videoconferencing System
As the bandwidth available on networks and the speed of computers increase,
real-time transmission of video between general-purpose workstations
becomes an increasingly realistic application. However, even with a
high-speed network, video has to be compressed before transmission: for
example, sending uncompressed NTSC video requires about 60 Mbit/s. A
relatively simple compression scheme can significantly decrease the rate of
video flows by exploiting the redundancy in video sequences.
Video compression is generally performed by some form of differential
coding, i.e. by sending only the differences between two consecutive images.
This leads to highly variable transmission rates, because the amount of
information to code between two images varies considerably, ranging from
very low for still scenes to very high for sequences with many scene
changes. Packet-switched networks such as the Internet are well suited to
transmitting such variable bit rate traffic.
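The idea of differential coding can be illustrated with a minimal sketch; this is hypothetical illustration code, not the coder used by any of the tools described here — real codecs difference transform coefficients, not raw pixels:

```python
def diff_encode(prev, curr):
    """Send only (index, value) pairs for pixels that changed
    between the previous and the current frame."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev, curr)) if p != v]

def diff_decode(prev, deltas):
    """Rebuild the current frame from the previous frame plus deltas."""
    frame = list(prev)
    for i, v in deltas:
        frame[i] = v
    return frame

prev = [10, 10, 10, 10]          # previous frame (grey levels)
curr = [10, 12, 10, 10]          # one pixel changed
deltas = diff_encode(prev, curr)
# A still scene yields few deltas, a scene change yields many --
# hence the highly variable bit rate described above.
assert diff_decode(prev, deltas) == curr
```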
Many algorithms have been proposed for the coding of video data. Some of
them have been standardized, such as JPEG for still images, and MPEG and
H.261 for moving images. MPEG coding is suited to high-definition video
storage and retrieval. Since the H.261 standard is the best suited to
videoconferencing applications, we chose to implement this compression
scheme in software in IVS (the INRIA Videoconferencing System).
However, this standard was designed for use over the Integrated Services
Digital Network (ISDN), i.e. for a network with fixed-rate channels;
packet-switched networks such as the Internet do not provide such channels.
It is therefore necessary to adapt H.261 in order to use it over the
Internet. We have developed a packetization scheme, an error control scheme
and an output rate control scheme that adapts the image coding process to
network conditions. The packetization scheme allows interoperability with
already available hardware codecs.
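The output rate control can be sketched as a simple feedback loop driven by receiver loss reports. This is a hypothetical illustration of the idea, not IVS's actual algorithm; the function name, thresholds and constants are all assumptions:

```python
def adapt_rate(target_kbps, loss_fraction,
               min_kbps=10.0, max_kbps=128.0):
    """Hypothetical additive-increase / multiplicative-decrease
    rate control: halve the coder's target rate when receivers
    report loss, probe upward slowly when the network is clean."""
    if loss_fraction > 0.02:                       # congestion signal
        target_kbps = max(min_kbps, target_kbps * 0.5)
    else:                                          # clean network
        target_kbps = min(max_kbps, target_kbps + 5.0)
    return target_kbps

rate = 64.0
rate = adapt_rate(rate, loss_fraction=0.10)  # heavy loss: rate is halved
rate = adapt_rate(rate, loss_fraction=0.0)   # no loss: probe gently upward
```

The coder then meets the target by adjusting its quantizer and frame rate, which is exactly what a software implementation makes easy compared with a fixed-rate hardware codec.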
Our objectives in developing IVS are the following:
- to allow low-bandwidth videoconferencing
- to implement standard compression algorithms
- to have a software solution that facilitates adaptation to network
conditions
This software brings a new dimension to classic workstations at low cost,
since minimal extra hardware is necessary. For instance, low-quality video
can be sent over a 9600 bit/s link. Interoperability between IVS and
hardware H.261 codecs such as those from GPT and Bitfield has been
demonstrated as part of the MICE project. The feedback mechanism which we
introduced ensures that IVS behaves as a "good network citizen".
See Also
The INRIA Videoconferencing System (IVS) by Thierry Turletti, INRIA.
wb
wb was written by Van Jacobson at Lawrence Berkeley Labs, and currently
represents the state of the art in distributed whiteboard applications. It
is intended to be used just as a real whiteboard is used in a conventional
classroom; as such it does not attempt advanced drawing functionality, and
limits itself to relatively simple drawing features. One of its design goals
was that new users should not be intimidated by having to learn a complex
new interface. It also supports the distribution and annotation of
PostScript foils such as a lecturer would print and use on a conventional
overhead projector.
wb supports multiple pages. These can be accessed at random, either under
the control of the drawer or under the control of the viewer. There is no
concept of floor control: any user can draw at any time, and all wbs in the
same conference will switch to the same page as the drawer (unless they have
explicitly disabled this feature). wb supports the following features:
- text in several fonts and colours
- free text positioning
- freehand drawing
- straight line drawing
- box drawing
- circle and ellipse drawing
- object copying
- object moving
- object deletion
Objects can only be moved, copied, or deleted by their creator. Users have
no control over other users' drawings, with the exception of being able to
hide all the objects belonging to a particular user; this prevents one user
from disrupting the conference for everyone. wb also allows users to include
PostScript images, and to distribute these so that all other users can see
and annotate them. More complex diagrams can thus be prepared using tools a
user is already familiar with, and then shared, reducing the learning
overhead required to use the whiteboard.
Distribution method
wb is an entirely distributed whiteboard, in that each site keeps a local
copy of all the pages of drawings it has seen. The data comprising the pages
is multicast to all remote copies as it is drawn. Packet loss is dealt with
by multicasting retransmission requests, and any site which did receive the
data can then reply, using a stochastic algorithm to decide which site
answers. This mechanism also copes with re-establishing consistency after
network partitions, and with late arrivals in the conference. This
distribution model means that wb conferences are very persistent: if just a
single participant leaves their instance of wb running, all the data that wb
has seen will be preserved and distributed to people rejoining the
conference. In large conferences on well advertised addresses this can be a
nuisance, but for most purposes this behaviour is very desirable.
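The stochastic reply mechanism can be sketched as follows. This is a simplified illustration in the spirit of wb's repair scheme, not its actual code; the function `repair_delay` and its constants are hypothetical. Each site holding the missing data sets a random timer weighted by its distance to the requester; whichever site's timer fires first multicasts the repair, and the others hear it and cancel.

```python
import random

def repair_delay(rtt_to_requester, c1=1.0, c2=1.0, seed=None):
    """Pick a random backoff proportional to the distance to the
    requester, so nearby sites tend to answer first and duplicate
    repairs are suppressed. c1 and c2 are assumed constants."""
    rng = random.Random(seed)
    return (c1 + c2 * rng.random()) * rtt_to_requester

# Three sites hold the data, at different RTTs from the requester.
# The site that draws the smallest delay multicasts the repair;
# the rest hear it before their own timers expire and cancel.
delays = {site: repair_delay(rtt, seed=site)
          for site, rtt in {1: 0.05, 2: 0.20, 3: 0.40}.items()}
responder = min(delays, key=delays.get)
```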
wb is available for anonymous ftp from ftp.lbl.gov in the directory
conferencing. It has been extensively used and can be considered mature.
There are binary distributions for several architectures; currently the
following are supported: Sun Sparc, DEC 5000, HP Snake. Source code is not
available.
Software Archives
vat from Lawrence Berkeley Laboratory
- Platforms:
- SUN SPARC station
- SGI station
- DEC station 5000s and DEC Alpha workstations
- HP 9000 workstations
- Description:
- Audio-conferencing tool which supports both point-to-point and
broadcasting of audio using multicast IP.
- Audio encoding:
- PCM: 64 kb/s, 8-bit u-law encoded at 8 kHz
- IDVI: 32 kb/s Intel DVI ADPCM
- GSM: 16 kb/s
- LPC1: 18 kb/s Linear Predictive Coder
- LPC2: 8 kb/s Linear Predictive Coder
- Details
- Ftp-site: ftp.ee.lbl.gov
- Further information: Van Jacobson (van@ee.lbl.gov)
- Contact address: vat@ee.lbl.gov
- Sources: not available.
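The PCM figure in the encoding list above follows directly from the sample format; a quick arithmetic check (nothing here is vat-specific, the parameters are just those stated in the list):

```python
sample_rate_hz = 8000    # 8 kHz sampling, as listed
bits_per_sample = 8      # 8-bit u-law samples
bitrate_kbps = sample_rate_hz * bits_per_sample / 1000
# 8000 samples/s * 8 bits = 64000 bit/s = 64 kb/s,
# matching the PCM entry; the other codings trade CPU
# time for a lower rate than this baseline.
```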
sd
From: Lawrence Berkeley Laboratory
- Platforms:
- SUN SPARC station
- SGI station
- DEC station
- Description:
- Session directory. SD lists all the multicast audio/video conferences
available on the Internet. Information about each conference is presented
to the user.
- Details:
- Ftp-site: ftp.ee.lbl.gov
- Further information: Van Jacobson (van@ee.lbl.gov)
- Sources: not available.
IVS
- Platforms:
- Sun Sparc station + VideoPix or Parallax framegrabber.
- HP station + Raster Rops framegrabber.
- Silicon Graphics station + Indigo framegrabber.
- DEC station + VIDEOTX framegrabber
- Description:
- Audio/video-conferencing tool which supports both point-to-point
and broadcasting of audio/video using multicast or unicast IP.
- Audio encoding:
- PCM: 64 kb/s, 8-bit u-law encoded at 8 kHz (G.711)
- DVI ADPCM: 32 kb/s
- Variable ADPCM (ADPCM + Huffman encoding): 10-30 kb/s
- Video encoding:
- H.261 variable rate. Three formats available:
- SCIF (704x576 pixels)
- CIF (352x288 pixels)
- QCIF (176x144 pixels)
- Details:
- FTP site zenon.inria.fr
- ivs3.2-sgi.tar.gz
- ivs3.2-solaris2.2.tar.gz
- ivs3.2-sun4OS4-px.tar.gz
- ivs3.2q-sun4OS4-vfc.tar.gz
- ivs3.2q-src.tar.gz
- Details
- From: INRIA Sophia Antipolis - RODEO Project.
- Ftp-site: zenon.inria.fr
- Further information: Thierry Turletti (turletti@sophia.inria.fr)
- Sources: available
- IVS 3.3m3 now available
- IVS 3.3m3 also supports the SunVideo card on Suns, the Galileo card on the SGI Indy, and the J300 on the DEC Alpha
- IVS 3.3m3 incorporates a congestion control scheme
- ivs3.3m3-src.tar.Z
- ivs3.3m3-sun4OS4.tar.gz
- ivs3.3m3-sun4OS5.tar.gz
- ivs3.3m3-ultrix4.3.tar.gz
- ivs3.3m3-sgi.tar.gz
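The three H.261 picture formats listed for IVS above are related by simple factors of two in each dimension, which a quick sketch makes explicit (the dimensions are exactly those stated in the list):

```python
formats = {                 # H.261 picture formats (width, height) in pixels
    "SCIF": (704, 576),
    "CIF":  (352, 288),
    "QCIF": (176, 144),
}
cw, ch = formats["CIF"]
# QCIF halves each CIF dimension (a quarter of the area),
# SCIF doubles each CIF dimension (four times the area),
# so a coder can switch formats to trade resolution for bit rate.
assert formats["QCIF"] == (cw // 2, ch // 2)
assert formats["SCIF"] == (cw * 2, ch * 2)
```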
nv
- Platforms:
- Sun Sparc station with videopix and parallax framegrabbers
- Silicon Graphics station + Indigo framegrabber.
- DEC station 5000 with Jvideo framegrabber
- Description:
- Video-conferencing tool which supports both point-to-point
and broadcasting of video using multicast or unicast IP.
- Video encoding:
- Proprietary algorithm.
- Supported formats: 352x288 pixels PAL, 320x240 pixels NTSC
- Details
- Versions 3.2 and 3.3
- ftp-site: parcftp.xerox.com
- nvbin-3.2-dec5k-jvideo.tar.Z
- nvbin-3.2-dec5k-pip.tar.Z
- nvbin-3.2-sgi.tar.Z
- nvbin-3.2-sun4.tar.Z
- Further information: Ron Frederick (frederic@parc.xerox.com)
- Sources: available
- nv 3.3alpha:
- currently undergoing alpha testing
- supports SunVideo card on Suns, Galileo card on Indys, J300 on DEC alphas
and grabbing from the screen.
- supports Cell-B encoding and decoding
- supports variable size images
- Ftp-site: parcftp.xerox.com
- README_FIRST
- nv3.3alpha-cpv.tar.Z
- nv3.3alpha-irix5.2
- nv3.3alpha-sunos4
- nv3.3alpha-sunos5
- nv3.3alpha.tar.Z