Handbook of Local Area Networks, 1998 Edition:Applications of LAN Technology
GETTING STARTED WITH IP-BASED VIDEOCONFERENCING
Drawbacks aside, it is clear that many organizations will choose IP-based videoconferencing for its:
Low cost of entry (e.g., integration into existing infrastructure).
Low cost of ownership (e.g., low maintenance and no telecommunications charges).
Ease of use (e.g., accessibility of the global IP network compared to ISDN provisioning).
For many years, real-time packetized audio and video over IP networks were tested and used on an isolated portion of the Internet: the multicast backbone (Mbone). The Mbone is operating, expanding, and improving. Some of the protocols used on the Mbone (developed by the Audio/Visual Working Group of the IETF) are being ratified and have migrated from this relatively exclusive academic and industrial environment into commercial routers for Internet and intranet deployment. Over the next 12 to 18 months, IETF protocols for managing video and audio packets will be widely incorporated in enterprises and on the Internet in general.
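The defining mechanism of the Mbone is IP multicast: a receiver subscribes to a group address, and routers (via IGMP) forward the group's traffic to it. A minimal sketch of that subscription step is shown below; the group address and port are illustrative placeholders, not a real Mbone session.

```python
import socket
import struct

def make_membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack the ip_mreq structure: multicast group address + local interface."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def join_group(group: str, port: int) -> socket.socket:
    """Open a UDP socket subscribed to an IP multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # accept datagrams addressed to this port
    # Ask the kernel (and, via IGMP, the local router) for the group's traffic.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

A receiver would then loop on `sock.recvfrom(...)` to pull the packetized audio/video datagrams for decoding.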
This section examines the components users need to add to their desktops for videoconferencing over IP. Managers of local (LAN), metropolitan (MAN), virtual (VAN), wide area (WAN), and global networks need to modify and prepare their networks to support the types of traffic generated by video-enabled desktops. The scope of these networking component changes and the available alternatives is discussed in more detail later in the chapter.
Desktop-Enabling Technologies
To experience desktop videoconferencing on the Internet (or an intranet) firsthand, the user needs only:
A camera for video input.
A microphone for audio input.
Speakers (presuming the user wants to hear what others say).
Software for connection initiation and answering, session management, and compression.
A video display board.
An IP network interface.
A premium CPU.
CU-SeeMe and Other Software Supporting Multicasting
The most distinctive and proprietary of these desktop components is the user application and interface software. The first, and consequently the most widely deployed, application designed for videoconferencing on the Internet, CU-SeeMe, originated at Cornell University. Distributed as freeware/shareware for its first several years, the application satisfied the needs of many Macintosh users in academic and nonprofit institutions for distance learning and research applications.
In 1995 Cornell University issued an exclusive license for commercial distribution of CU-SeeMe to White Pine Software. Since then, White Pine Software has ported the application to other platforms and greatly enhanced its functionality (e.g., adding color video, password security, and whiteboard capabilities).
CU-SeeMe, like several competing user applications currently offered on the Internet (for example, VDOnet's VDOphone, CineCom's CineVideo/Direct, Apple Computer's QuickTime Conferencing, and Intelligence at Large's Being There), provides a directory management system; call initiation, answering, and management software; and utilities for controlling video and audio quality during a session.
The Desktops Connected to the Mbone
Precept Software has developed multicast audio/video server and viewer products for Windows 3.11, Windows 95, and Windows NT to help the PC/Windows world join the Mbone community. The viewer, called FlashWare Client, can receive Mbone sessions transmitted with Lawrence Berkeley Laboratory's vat 4.0 in real-time transport protocol (RTP) mode (selected via the -r option) using PCM, DVI, or GSM audio-encoding algorithms, and vic 2.7 using its default H.261 video codec.
On the Precept web site is a program guide that lists Mbone sessions using these protocols; users can launch the client automatically from there. The client is built as a media control interface (MCI) device driver, so it can be invoked through Microsoft's Media Player, a Netscape plug-in, or other applications using the MCI API. Playback of audio and video is synchronized using the time-stamping mechanisms in RTP and the real-time transport control protocol (RTCP).
The IETF Audio/Visual Transport Working Group's RTP and RTCP protocols have been developed to facilitate the smooth transmission, arrival, and display of streaming data types. When end-point applications support RTP, packets leave the sender's desktop with a time-stamp and a content identification label. Using this information, and through congestion-monitoring facilities at either end, the proper sequence of frames can be more reliably re-created at the receiving station, using a specified delay buffer (generally less than 100 milliseconds). Netscape's CoolTalk is another example of an architecture for streaming video and audio between RTP-ready end-points.
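The time-stamp and content identification label travel in the fixed RTP header. A minimal Python sketch of decoding that header is shown below; the field layout follows the published RTP specification (RFC 1889), and the example packet bytes used with it are hypothetical.

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Decode the fixed 12-byte RTP header.

    The payload type is the content identification label (it names the
    codec that produced the payload), the sequence number lets the
    receiver re-create frame order, and the timestamp drives
    synchronized playout through the delay buffer.
    """
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # RTP protocol version (2)
        "marker": bool(b1 & 0x80),   # e.g., end of a video frame
        "payload_type": b1 & 0x7F,   # e.g., 0 = PCM mu-law audio, 31 = H.261
        "sequence": seq,             # per-packet counter for reordering
        "timestamp": ts,             # media clock for synchronized playout
        "ssrc": ssrc,                # synchronization source identifier
    }
```

A receiver holds packets in its delay buffer, reorders them by sequence number, and schedules each for display according to its timestamp.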
Compressing Audio and Video Streams
In all but the most exceptional conditions (e.g., broadcast-quality production requirements), digital video and audio need to be compressed (encoded) to keep bandwidth demands manageable. The information must then be decompressed (decoded) upon arrival so that it can be displayed on its destination screen.
A comprehensive discussion of compression technology, and the ensuing debates over the virtues of different algorithms, is beyond the scope of this chapter; however, it must be noted that digital video compression has a marked impact on the quality of the experience users can expect when videoconferencing over an IP network.
All freeware applications for IP-based videoconferencing bundle a software codec for encoding and decoding the audio and video streams at the appropriate bandwidth for the station. Software codecs deliver lower-quality audio and video than hardware codecs, which use digital signal processors (DSPs) optimized for these functions. Currently there are no standard compression algorithms for use on IP-based networks, so users receive the codec specified by the desktop application.
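As a rough illustration of why the codec choice matters, the arithmetic below estimates on-the-wire bit rates for the audio encodings mentioned earlier; the 20 ms packetization interval and the 40 bytes of IP/UDP/RTP header overhead per packet are assumptions made for the sketch.

```python
def stream_kbps(payload_bps: float, packet_ms: float) -> float:
    """On-the-wire kilobits/s for a packetized audio stream:
    codec payload plus IP (20) + UDP (8) + RTP (12) header bytes per packet."""
    packets_per_sec = 1000.0 / packet_ms
    overhead_bps = packets_per_sec * (20 + 8 + 12) * 8
    return (payload_bps + overhead_bps) / 1000.0

# PCM mu-law: 8,000 samples/s x 8 bits = 64 kbps of payload.
# GSM full-rate: roughly 13 kbps of payload.
```

At 20 ms per packet (50 packets/s), headers add 16 kbps, so PCM costs about 80 kbps on the wire while GSM costs about 29 kbps; this is the gap a more efficient codec exploits.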
In the case of the freeware developed by Lawrence Berkeley Laboratory, as well as Apple's QuickTime Conferencing, the architecture can accommodate any number of compression algorithms, including H.261, which is the basis of all H.320 systems. These products will comply with a new specification, H.323, for videoconferencing over IP networks and use H.261 as a codec; however, a newer and more efficient codec (H.263) consumes less bandwidth and will quickly replace H.261 on IP networks.