Chapter Contents


2.0 Multimedia PC


2.1 MPC

2.1.1 Architecture

2.1.2 The Motherboard

2.1.3 Faster CPUs vs. DSP


2.2 The Bus

2.2.1 ISA Bus

2.2.2 MCA Bus

2.2.3 PCI Local Bus


2.3 The Display

2.3.1 Cathode Ray Tube (CRT)

2.3.2 Liquid Crystal Displays

2.3.3 Plasma Displays

2.3.4 Video Walls


2.4 PC Modems

2.4.1 Software Modems

2.4.2 V.80

2.4.3 56K Modems and V.90

2.4.4 DSL Modems

2.4.5 Cable Modems


2.5 PC Audio


2.6 CD-ROM

2.6.1 The Physical Medium

2.6.2 Low-level Data Encoding

2.6.3 First Level Error Correction

2.6.4 Subcoding Channels and Blocks

2.6.5 Track Types


2.7 DVD

2.7.1 DVD Features


2.8 MMX


2.9 I/O

2.9.1 Serial Ports

2.9.2 Parallel Ports

2.9.3 Analog Ports


2.10 File Types

2.10.1 MIME

2.10.2 Graphic or Image Formats

2.10.3 Audio Formats

2.10.4 Video Formats


2.11 Video for Windows


2.12 Plug and Play


Assignment Questions


For Further Research



2.0    Multimedia PC



The purpose of this section is to:

•   Identify the basic characteristics of a multimedia PC

•   Examine some of the architectural considerations for a multimedia PC

•   Examine some of the software file types available for a multimedia PC


To gain some insight into how little the personal computer industry has understood its own future, remember these words:

“There is no reason for any individual to have a computer in his home.” Kenneth H. Olson, President of DEC, Convention of the World Future Society, 1977

“640K ought to be enough for anybody.” Bill Gates (Born 1955), Microsoft Founder, in 1981

All general-purpose computers require the following hardware components:

Memory – enables a computer to store data and programs.

Mass storage device – allows a computer to permanently retain large amounts of data. Devices include disk and tape drives.

Input device – usually a keyboard, mouse, or trackball.

Output device – a display screen, printer, or other device that lets you see what the computer has done.

Central processor: the component that actually executes instructions.

Bus – a backplane that transmits data from one part of the computer to another.



Computers can be classified by size and power:

Personal computer – a small, single-user microprocessor computer.

Workstation – has a more powerful microprocessor and a higher-quality monitor than a PC.

Minicomputer – a multi-user computer capable of supporting 10 or more simultaneous users.

Mainframe – a powerful multi-user computer capable of supporting hundreds of users simultaneously.

Supercomputer – an extremely fast computer that can perform hundreds of millions of instructions per second.

2.1     MPC


The MPC Working Group is a part of the SPA (Software Publishers Association) IMSIG (Interactive Multimedia Special Interest Group). Some of the participants are Creative Labs, Disney Interactive, Dell, Fujitsu, Personal Systems, Gateway 2000, Horizons Technology, IBM, Intel, NEC, Philips, Zenith, Quicksilver Software, and Sigma Designs.

Certification from this agency provides some measure of compatibility between various hardware and software vendors. It represents a first and very important step towards standardization of multimedia technology.

The latest standard for personal computers defined by the Multimedia PC Marketing Council is MPC 3.0:


MPC 3.0

Operating System	Windows 3.11 and DOS 6.0 or binary compatible

CPU	75 MHz

RAM (min)	8 MB

Floppy Disk	1.44 MB

Hard Disk	540 MB

CD Access	250 mSec

Audio DAC	44.1 KHz, 16 bit

Audio ADC	44.1 KHz, 16 bit

Audio Output	Stereo, MIDI, 3 watts/ch

Video Playback	Color conversion & scaling; direct access to frame buffer for video-enabled graphics subsystem with a resolution of 352 x 240 at 30 fps [or 352 x 288 at 25 fps] at 15 bits/pixel, unscaled, without cropping

Video Decode	MPEG1 [hardware or software]

Keyboard	101 keys

Mouse	Two button

I/O Ports	Serial, parallel, MIDI, game


2.1.1    Architecture

VXI Data Acquisition Handbook by Kinetic Systems Corp.


A multimedia PC must support high quality sound and graphics. One reason for this is the proliferation of computer games. In the future, multimedia PCs will also support business applications such as telephony and video services.

The PC has the potential to become an answering/fax machine, speakerphone, data set, and even a TV. Some of these applications place enormous demands on the PC microprocessor.

Multimedia applications, such as desktop conferencing, require full duplex audio and synchronized video.

2.1.2        The Motherboard



The motherboard design has a profound effect on whether a computer can support a wide range of multimedia applications. Apple, for example, has integrated FireWire into its entire product line, making consumer video editing possible. This has not happened in the Windows world, so video editing there remains awkward to support.

Multimedia PC Motherboard

This motherboard is used in the PCs in room 228 at Heritage College.

Multimedia PC Block Diagram

2.1.3    Faster CPUs and DSP

There are two basic approaches to supporting video on the PC: perform most of the video processing in a very fast CPU, or use a separate processor such as a DSP. Consequently, there are both hardware and software video codecs on the market. The same is true of the much less demanding dial-up modem.

Systems intended for the home or small business applications seldom need a great deal of video processing, and therefore rely upon faster CPUs. High-end workstations for video conferencing or animation require dedicated hardware.

Additional References for Multimedia PCs


2.2     The Bus

A bus is a series of printed circuit tracks that convey signals. Inside the computer, busses transfer information in a parallel mode; outside of the computer, they tend to convey information in a serial mode. Today’s PCs contain one or more internal parallel busses, each with its own strengths and weaknesses.

There are typically 3 busses inside a PC:

Local Bus – this interconnects the microprocessor to cache memory, RAM, any coprocessors and a PCI bus controller.

PCI Bus – These are expansion slots used to connect specialty cards

ISA Bus – This is an older style bus still used for backward compatibility

Besides these, some computers may have one or more of a wide variety of busses.













PC/XT	Obsolete, similar to ISA bus

ISA	No bus master support

MCA	IBM proprietary, nearly dead

EISA	Bus master capable, auto configurable, backwards compatible with ISA, sharable IRQs, DMA channel

VESA Local Bus	25 – 40 MHz	Bus master capable, backward compatible with ISA, good for video cards but lacks a bus arbiter

PCI	33 – 66 MHz	This is widely used as an expansion bus and has replaced the ISA, EISA, and VESA bus.


2.2.1    ISA Bus

The ISA bus has limited performance, no bus master capability, rows of setup jumpers and switches, and poor electrical characteristics. It uses a two-part connector. It may eventually be replaced with the PCI bus.

2.2.3        PCI Bus

Passive Backplane & PCI Technology by DEC


The PCI bus was developed through the cooperative efforts of Compaq, Digital, Intel, IBM and NCR, in order to create a bus independent of the microprocessor type.

The PCI SIG (Special Interest Group) defined the bus to support high-performance I/O devices such as graphics adapters, hard drive controllers, and LAN adapters. It also supports ‘plug and play’.

The PCI Bus is clocked independently of the microprocessor and operates on 32 or 64 bits of data at a clock speed of 33 or 66 MHz.

With the display and hard drive controller on the PCI bus, the remaining I/O traffic can be supported on an ISA bus.

2.3             The Display


There are several types of displays on the market today:

Cathode ray tubes

Liquid crystal displays

Plasma display panels

The most common desktop display uses a CRT, while LCDs are used for laptop computers and calculators. Although plasma displays are presently quite expensive, they hold the promise of increasing display size while reducing weight.

A computer display operates in a manner very similar to a TV display tube. The principal differences are the increased resolution and the scanning method. TV display tubes use interlaced scanning in order to reduce image flicker. Most computer display tubes use progressive scanning and therefore require higher refresh rates.

Because PCs store color information digitally, it is possible to get much better and more consistent color imaging than the standard NTSC analog video display.

Television display tubes have a fixed aspect ratio [4:3] and display about the same number of lines [~483]. Computer display tubes vary both of these parameters.

2.3.1    Cathode Ray Tube (CRT)

Understanding the Operation of a CRT Monitor by National Semiconductor

Monitor VGA by National Semiconductor


Computer monitors are very similar to color TV tubes. However, there are some fundamental differences:


	TV Monitor	Computer Monitor

Horizontal scan rate	15.75 KHz [NTSC]	25 – 80 KHz

Video signal	Composite video	RGB video

Resolution	Specified in lines	Specified in pixels

Aspect ratio	Fixed [4:3]	Variable

Although all tubes can properly be called cathode ray tubes, the term is primarily applied to display tubes.

An electron beam is generated and accelerated towards a phosphor layer. The phosphor emits light proportional to the beam intensity. The beam intensity and hence image brightness is varied by means of a voltage applied to the control grid.

The phosphor screen is coated with an aluminum layer, which performs three important tasks:

•   It acts as an anode and attracts the beam

•   It prevents electrons from accumulating on the screen and therefore repelling the beam

•   It increases the screen brightness by reflecting the emitted light forward

The beam scans across and down the phosphor screen, starting in the upper left-hand corner.

TV Monitors

The greater the number of lines, the greater the resolution and bandwidth required. In North America, there are 525 lines in a complete frame, occupying a bandwidth of about 6 MHz, whereas in Europe there are typically 625 lines requiring an 8 MHz bandwidth.

If all of the lines were scanned at the line rate, the required bandwidth would be enormous. To avoid this, the frame is broken into 2 fields, each containing every second line of the picture. In North America, there are 30 frames, hence 60 fields per second. This process is called interlacing.

Since the video signal to the computer monitor is not broadcast, the bandwidth limitations do not apply. As a result, progressive scanning is used to form the image.

This difference makes it awkward for computers to display TV images.
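The scan rates quoted above follow from a few lines of arithmetic. This sketch uses the nominal 525-line, 30-frame NTSC figures from the text; the 1000/1001 correction shown for the color line rate is the factor behind the 15,734.26 Hz figure given later in this section.

```python
# Nominal NTSC scan-rate arithmetic, using the 525-line, 30-frame,
# two-field figures given in the text.
LINES_PER_FRAME = 525
FRAMES_PER_SEC = 30
FIELDS_PER_FRAME = 2

field_rate = FRAMES_PER_SEC * FIELDS_PER_FRAME      # vertical scan rate
line_rate = LINES_PER_FRAME * FRAMES_PER_SEC        # horizontal scan rate

# Color NTSC slows the timing by a factor of 1000/1001, giving the
# 15,734.26 Hz horizontal rate used for color broadcasts.
color_line_rate = line_rate / 1.001

print(field_rate)                    # 60 fields per second
print(line_rate)                     # 15750 Hz (monochrome)
print(f"{color_line_rate:.2f}")      # 15734.27 Hz
```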

Color TV CRT

The color picture tube contains three electron guns, one for each primary color. The phosphor layer is composed of a pattern of dots, or bars, of three different phosphors. It is important that the beam associated with each color strike only its own color phosphor. For this reason, a shadow mask with some 200,000 holes is placed above the phosphor layer.

Beam, mask and phosphor alignment requires precision similar to the manufacturing of semiconductors. To ease the convergence and alignment difficulties, many CRTs place the three electron guns side by side.

Beam Deflection

An electron beam will alter its trajectory if it encounters either an electric or magnetic field.

All CRTs use an electric field to propel the beam to the anode, and some even use it to deflect the beam sideways. Electric field deflection supports variable rate scanning and therefore finds its main use in oscilloscope displays.

Magnetic field deflection is limited to fixed rate scanning applications, such as TV and computer monitors. Since magnetic deflection is more effective than electric deflection, the CRT can have a shorter neck and wider screen.

Electromagnetic Beam Deflection

Scanning is performed simultaneously in two directions to create an image. The vertical deflection occurs at about 60 Hz, and the horizontal deflection at about 15.75 KHz.

The beam is visible only when it moves from left to right. It is turned off, or blanked during the retrace period (moving from right to left).

An enormous back EMF is created when the magnetic fields are turned off. The flyback from the horizontal circuit is in the region of kilovolts, and is used to supply all of the high voltages needed to operate the CRT.


As an electron beam travels down a tube, it has a tendency to spread since the electrons repel each other. If this were allowed to continue, the electron beam would be very large when it strikes the phosphor screen and the image would be blurred. To prevent this, the beam is focused. This can be done magnetically or electrostatically.

Electromagnetic Focusing[5]

Electrostatic Focusing[6]


To obtain a stable display, it is necessary to synchronize the television display and originating imaging device. This is accomplished by means of horizontal and vertical synchronization pulses.

Horizontal Sync

A horizontal sync pulse occurs at the end of each scanned line, and forces a retrace. In broadcast television, this originally occurred at a rate of 15750 Hz, but with the advent of color, this was changed to 15734.26 Hz. To prevent visual artifacts during the retrace, the beam current is switched off. This is accomplished by a blanking pulse, which corresponds to a black level. The synchronization pulse is placed on top of the blanking pulse and therefore corresponds to a blacker than black level.

The actual luminance information is placed between adjacent horizontal blanking pulses.

The spectral components of both the blanking and sync pulse will be spaced at multiples of the line rate, but the zero crossings of the spectral envelope will differ because of the difference in duty cycle. In order that the spectral components of these two pulses will add vectorially in phase, the sync pulse is offset from the center of the H blanking pulse by approximately 1.65 µsec.

The spectrum of the horizontal blanking pulse resembles:

One interpretation of this waveform is that the continuous video information is being interrupted or multiplied by the synchronization process. Mathematically, the multiplication of components in the time domain results in additions and subtractions in the frequency domain. Consequently, the video information signal appears to be added and subtracted, or clustered around the spectral components of the synchronization and blanking pulses.

Frequency Range


Rate of decrease

[dB per octave]

0.0157 - 0.75


0.75 - 4


> 4



Vertical Sync

A vertical sync pulse is created by inverting a series of horizontal sync pulses. The receiver uses a simple integrator and threshold detector to trigger the vertical retrace.

Equalizing Pulses

To interlace an image, the first visible odd line appears at the upper left hand corner of the CRT, and the first visible even line appears at the top center.

If this timing is not exact, the space between the interlaced lines will not be even, and line pairing will occur. Consequently, six equalizing pulses are placed before and after the vertical sync pulse. They are one half the pulse width and occur at twice the rate of the horizontal sync pulses.

Therefore, the relationship between the horizontal pulses and vertical pulses for the odd and even fields are not quite the same.

Color Burst

Color information is conveyed by varying the amplitude and phase of a 3.58 MHz carrier. A reference signal burst is sent to help decode the color correctly. This color burst signal is located on the back porch of the horizontal blanking pulse.

All these signals require extreme precision.

2.3.2        Liquid Crystal Displays

The TFT500 Flat Panel Monitor: Key Technologies by Compaq Computers

A flat panel display is not easy to make. Most suffer from poor brightness, low contrast, low refresh rate, or limited viewing angle.

Most LCDs use nematic liquid crystals and are known as twisted nematic devices. The liquid crystal layer is approximately 10 microns thick. In the normal state, the liquid crystal molecules form a twisted structure. An optical filter polarizes light entering the display. The liquid crystal rotates the polarized light and allows it to pass through the bottom filter, so the cell appears transparent. When energized, the molecules line up; the polarized light is not rotated and is blocked by the bottom filter.

Color LCD

A color display can be made by adding RGB filters.


2.3.3    Plasma Displays

Plasma displays excite phosphors in a low-pressure mixture of neon/xenon gas. Three phosphor cells form a single pixel, each of which is excited by an anode and cathode. The display uses the same phosphor as conventional CRTs.

These screens can be made quite thin, bright, and large; however, the control circuitry is quite complex, and the viewing angle tends to be limited.

Color displays operate in the same way that a fluorescent tube works. The ionized gas creates ultraviolet light, which strikes the phosphor coating and is reradiated at a different wavelength.

2.3.4    Video Walls

Projector arrays can be used to create huge display areas. These often have to be designed to dissipate considerable heat.


2.4     PC Modems


Most modems communicate with the host over a serial I/O port.

In the past, modems were external devices and communicated to a PC via a UART and serial port.

Programs such as Crosstalk, Procomm, and Windows Terminal use COM1 to COM4 to send AT Commands to the UART, telling the modem which functions to perform. DSP chips implement intensive signal processing functions such as the modem data pump, DTMF generation and detection. Telephony codecs are used to perform the sampling, A/D, D/A, and filtering functions, and a DAA (Data Access Arrangement) provides electrical isolation from the local telephone loop.

Bit rates through the PSTN are limited by two characteristics of the BORSCHT interface:

•   Quantization noise created by the A-law and µ-law codecs

•   Band limiting by the anti-aliasing filters

2.4.1    Software Modems


Traditionally, modems have performed the following basic functions:

Receive and transmit signals.

Convert signals from digital to analog or vice-versa. This is known as modulation or demodulation of the signal and is performed by the Digital Signal Processor (DSP).

Compress and decompress data.

Convert the serial data stream into parallel data for the system bus, and vice-versa. The UART chip is responsible for this function.

Software modems perform most of these functions in the microprocessor and emulate the UART.

Software modems are also known as controller-less modems, host signal processing (HSP) modems, or winmodems. Two popular chipsets are the RPI and the HSM chipsets.

2.4.2    V.80

V.80 is used in conjunction with the H.324 videoconferencing standard. It:

•   Supports synchronous data streams over asynchronous modem connections

•   Supports dynamic rate adaption to adjust to line conditions

•   Communicates lost packet information to the application

This standard was designed to support real time audio and video.

2.4.3    56K Modems and V.90

Some 56K modems support V.80 video conferencing and full duplex speakerphone. They attempt to connect at 56K and fall back in 2 K increments to the highest bit rate the loop connection permits.

The modem signal passes through a BORSCHT line interface in the central telephone office. Quantization noise in the codec limits the uplink to about 35 Kbps. Since the ISP uses a digital connection to the network, the download speed can be slightly higher and may approach 56K.
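The fallback behaviour described above can be sketched as a simple rate ladder. The 2 Kbps step size comes from the text; the 32 Kbps floor and the function name are illustrative assumptions, since the actual floor varies by chipset.

```python
# Illustrative 56K fallback ladder: try 56 Kbps, then step down in the
# 2 Kbps increments described in the text until the loop accepts a rate.
# The 32 Kbps floor is an assumption for illustration.
def fallback_rates(top=56_000, step=2_000, floor=32_000):
    """Return the connect rates a modem would try, highest first."""
    return list(range(top, floor - 1, -step))

rates = fallback_rates()
print(rates[:3])    # [56000, 54000, 52000]
print(len(rates))   # 13 candidate rates
```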

The V.90 standard is intended to harmonize the various proprietary 56K schemes.

V90 Technology by 3Com

V90 – The ITU 56K Standard by Compaq


2.4.4    DSL Modems


ADSL Tutorial

VDSL Tutorial

TR-001 ADSL System Reference Model

TR-003 ADSL Packet Framing

TR-007 CPE Interfaces

TR-017 ATM over ADSL

The ADSL Forum was formed in December of 1994 to promote ADSL development, system architectures, protocols, and interfaces.


Line Type	Data Rate	Applications

ADSL – Asymmetric digital subscriber line	1.5 to 9 Mbps down; 16 to 640 Kbps up	Internet access, VoD, remote LAN, interactive multimedia

DSL – Digital subscriber line	160 Kbps	2B+D ISDN, voice and data

HDSL – High data rate digital subscriber line	1.544 Mbps [2TP]; 2.048 Mbps [3TP]	T1/E1 service feeder plant, WAN, LAN access, server access

SDSL – Single line digital subscriber line	1.544 Mbps; 2.048 Mbps	Same as HDSL plus premises access for symmetric services

VDSL – Very high data rate digital subscriber line	13 to 52 Mbps down; 1.5 to 2.3 Mbps up	Same as ADSL plus HDTV [Also called BDSL, VADSL, or ADSL]


A number of services can be supported by these new transmission formats:

·         Video on demand and Internet access can be supported on 18 Kft, 1.5 Mbps loops.

·         Digital live television can be supported on 4.5 Kft, 6 Mbps loops. This is the main telco interest in digital TV

·         HDTV, requiring 20 Mbps, can be supported over the shortest loops.

Additional tutorials:

3Com - xDSL Loop Technology

Cisco - DSL


Domestic applications such as VoD, home shopping, Internet access, remote LAN access, and specialized multimedia PC services can be supported, even when the ratio of downstream to upstream traffic is 10 to 1.

The ADSL bit rate is a function of distance: longer loops support lower rates, as the examples above indicate.


Upstream speeds range from 16 to 640 Kbps depending on the manufacturer. However, in all cases ADSL operates in a frequency band above POTS. Thus POTS service is guaranteed even if the modem fails.

ADSL can be used for circuit switched, packet switched and eventually, ATM data.

·         CAP [Carrierless Amplitude/Phase modulation] ‑ a version of suppressed carrier QAM. For passive NT configurations, CAP would use QPSK upstream and a type of TDMA for multiplexing. TR-015 CAP Line Code

·         DMT [Discrete MultiTone] ‑ a multicarrier system using the DFT to create and demodulate individual carriers. For passive NT configurations, it would use FDM for upstream multiplexing. TR-014 DMT Line Code


CAP is similar to QAM. CAP performs the orthogonal signal modulation digitally. The digital input is divided into two streams and passed through digital transversal bandpass filters. The two filters have equal amplitude response but a π/2 difference in phase response. This is known as a Hilbert pair. The signals are then combined and converted to the analog domain before being transmitted. This process allows more of the device to be implemented in silicon than the equivalent QAM modem.


The DMT technique divides a high bit rate serial data stream into numerous low speed sub-channels. Each sub-channel modulates its own carrier. Multi-carrier techniques require a great deal of digital processing. DMT is closely related to OFDM [Orthogonal Frequency Division Multiplexing] and C-OFDM [Coded OFDM], which is used in European DAB [Digital Audio Broadcasting].

The ANSI T1.413 DMT standard specifies 256 subcarriers, each with a 4 KHz bandwidth. Each subcarrier can be independently modulated from zero to a maximum of 15 bits/sec/Hz, which supports up to 60 Kbps per tone. Some implementations support 16 bits/sec/Hz, for rates of 64 Kbps per tone.

The specification allows modems to dynamically adjust both the bit rates and sub-channels used.

Bit rates of 10 bits/Hz are typical in the low frequency sub-bands. This is usually reduced to 4 bits/Hz or less if the line conditions deteriorate or cross-talk increases.
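The per-tone and aggregate figures above follow directly from the T1.413 parameters. The sketch below computes the theoretical ceiling; real links reserve tones for POTS separation and pilot signals and reduce the bit loading on noisy tones, so achieved rates are much lower.

```python
# Capacity arithmetic for the ANSI T1.413 DMT parameters quoted above:
# 256 subcarriers, 4 KHz each, up to 15 bits/sec/Hz per tone.
SUBCARRIERS = 256
TONE_BANDWIDTH_HZ = 4_000
MAX_BITS_PER_HZ = 15

per_tone_bps = TONE_BANDWIDTH_HZ * MAX_BITS_PER_HZ   # 60 Kbps per tone
aggregate_bps = SUBCARRIERS * per_tone_bps

print(per_tone_bps)                       # 60000
print(f"{aggregate_bps / 1e6:.2f} Mbps")  # 15.36 Mbps theoretical ceiling
```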

2.4.5    Cable Modems

Some tutorials on cable modems:

ADC - Cable Modem Overview

CableLabs - Cable Data Modem

NextLevel - Cable Modems Tutorial

Cadant - Cable Modems

CableLabs - Terayon Cable Modem


PHY features [Downstream]:

Based on North American Video Transmission Specs

64/256 QAM

Concatenation of Reed-Solomon and Trellis FEC

Variable depth interleaving supporting both latency-sensitive and latency-insensitive data

Contiguous serial bit-stream with no implied framing providing complete MAC/PHY decoupling

PHY features [Upstream]:

QPSK and 16QAM

Multiple symbol rates

Frequency agility


Support of variable length and fixed frame PDU formats

Programmable Reed-Solomon block coding

Programmable preambles

Minimal coupling between physical and higher layers accommodating future PHYs

The IEEE 802.14 Working Group is defining standards for data transport over cable TV networks. The reference architecture specifies a hybrid fiber/coax plant with an 80-km radius. The goal is to support Ethernet connections; however, it may be adapted to support ATM.

Cable modems provide rates up to 10 Mbps. However, this is reduced as more subscribers are connected to any given segment. Since segments are shared, there are some security and privacy concerns.

Most cable modems use one of the 6 MHz TV channels above 50 MHz for a downstream channel although this may migrate to 550 MHz. It can operate at about 30 Mbps using 64 QAM. Information is sent in the downstream channel, by cells or packets, addressed to a specific end-user.
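The "about 30 Mbps using 64 QAM" figure above can be checked with a rough calculation. The 5 Msym/s symbol rate in a 6 MHz channel is an assumed, typical value (filter roll-off leaves somewhat less than 6 Msym/s usable), so this is a sketch rather than a spec value.

```python
import math

# Rough check of the "about 30 Mbps using 64 QAM" downstream figure.
# 64-QAM carries log2(64) = 6 bits per symbol; the 5 Msym/s symbol
# rate in a 6 MHz channel is an assumption for illustration.
bits_per_symbol = int(math.log2(64))
symbol_rate = 5_000_000          # symbols/sec, assumed

raw_bit_rate = bits_per_symbol * symbol_rate
print(raw_bit_rate)              # 30000000 bps, before FEC overhead
```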

The upstream channel is located between 5 and 50 MHz. Upstream rates in the low megabits should be available on good HFC systems. A media access control scheme places user packets or cells onto a single channel. Collisions are avoided by sending control signals in the downstream.

Some cable modems assign upstream frequency channels to each user. Others combine TDMA with FDMA. A few propose CDMA. Data rates do not depend upon cable length, since repeaters in the network boost signal power. Rather, capacity depends on system noise and the number of simultaneous users.

2.5     PC Audio

A Desktop PC Audio Design by Cirrus Logic

SoundPort Codecs


The standalone sound card subsystem has a slightly different analog interface than a modem, but has a similar range of I/O locations. It also supports WAV files and perhaps an FM synthesis chip. The codec has a sampling rate of 44.1 KHz, a resolution of 16 bits per sample, and provides CD quality stereo.


In the past, audio and modem cards were implemented as separate entities. However, new applications such as voice over the Internet and full duplex speakerphone are more easily supported if these functions are implemented on the same card.


DSPs perform the intensive signal processing tasks of playing and recording WAV files, and FM synthesis. The Soundblaster protocol is much slower and can be implemented in software on the card, PC, or an ASIC device.

2.6     CD-ROM

The CD-ROM has become a necessary part of any personal computer. For more detailed information, please refer to the following applications notes.

A Primer on CD-R

A Fundamental Introduction to the CD Player by K.M. Buckley

OSTA Universal Disk Format Specification



CD-ROM drives are commonly rated by access time and sustained bit rate, both of which improve with the drive’s speed rating.

•   High information density  ‑ It can contain 540 megabytes of data on a 120-mm disc

•   Low cost ‑ In large quantities, the per-unit cost is less than two dollars.

•   Read only medium ‑ It is useful for electronic publishing, distribution, and access, but cannot replace erasable magnetic disks.

•   Modest random access performance – The access time is shorter than a floppy disk’s but longer than a magnetic hard disk’s.

•   Robust, removable medium ‑ The CD is made of plastic and the encoding method makes it scratch resistant. Unlike hard drives, they can be removed.

•   Multimedia storage ‑ CDs are able to store text, images, graphics, and sound.

2.6.1    The Physical Medium

A CD is a plastic disk 1.2 mm thick and 120 mm in diameter. Information is encoded in a plastic covered spiral track on the top of the disk. A non-contact head optically reads the track. It is scanned at a constant linear velocity [CLV] thus providing a constant data rate. The disc rotates at about 250 rpm when reading near the outside edge and 500 rpm near the center.

The track consists of a reflective layer of shallow pits and lands. A low power laser beam is focused on the spiral layer and is reflected back into the head. The amount of reflected light varies depending on whether it strikes a pit or land. This is converted to an electrical signal by a photodetector.

2.6.2    Low-level Data Encoding

The current CD scheme encodes marks as transitions from pit to land or land to pit and spaces as constant pit or land.

Each 8-bit data byte is encoded into 14 bits on the track. This scheme is referred to as EFM [Eight-to-Fourteen Modulation] coding.

Three merging bits are added between each set of 14 track bits. These limit the run length and reduce the data signal DC content. Thus, an eight-bit byte of actual data is encoded into a total of 17 channel bits: 14 EFM bits and 3 merging bits.

Each frame contains a synchronization pattern, twenty-four data bytes, eight error correction bytes, a control and display byte, and merging bits. The 588 channel bits are arranged as follows:



Sync Pattern	24 + 3 channel bits

Control and Display	14 + 3

Data	12 x (14 + 3)

Error Correction	4 x (14 + 3)

Data	12 x (14 + 3)

Error Correction	4 x (14 + 3)


Consequently, 192 data bits (24 bytes) are encoded as 588 track bits.
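The 588-bit total can be verified by tallying the frame components, remembering that every byte on disc occupies 14 EFM bits plus 3 merging bits:

```python
# Channel-bit tally for one CD frame, following the layout above.
# Every byte on disc occupies 14 EFM bits plus 3 merging bits.
CHANNEL_BITS_PER_BYTE = 14 + 3

frame_bits = {
    "sync pattern":          24 + 3,
    "control and display":   1 * CHANNEL_BITS_PER_BYTE,
    "data (2 runs of 12)":   24 * CHANNEL_BITS_PER_BYTE,
    "parity (2 runs of 4)":  8 * CHANNEL_BITS_PER_BYTE,
}

total = sum(frame_bits.values())
data_bits = 24 * 8

print(total)        # 588 channel bits per frame
print(data_bits)    # 192 data bits per frame
```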

2.6.3    First Level Error Correction

CIRC [Cross-Interleaved Reed-Solomon Coding] is used to minimize errors caused by disk damage. Rather than being placed sequentially on the disk, data is distributed, so local disk damage does not obliterate a continuous data block. This breaks long error bursts into shorter ones. Error correction is provided by Reed-Solomon coding.
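The principle can be illustrated with a toy block interleaver. This is not the actual CIRC algorithm (which uses cross-interleaving with varying delays), and the dimensions are arbitrary, but it shows how a burst of consecutive errors on the disc becomes several short, widely separated errors after de-interleaving:

```python
# Toy block interleaver: write symbols row by row, read them out
# column by column. A burst of consecutive errors on the disc then
# maps to symbols spaced COLS apart in the original stream, which
# Reed-Solomon coding can correct individually.
ROWS, COLS = 4, 6

def interleave(data):
    """Write row by row, read out column by column."""
    return [data[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

symbols = list(range(ROWS * COLS))
on_disc = interleave(symbols)

# A scratch wiping out four consecutive symbols on the disc...
damaged = on_disc[8:12]
# ...hits symbols that are far apart in the original order.
print(damaged)   # [2, 8, 14, 20]
```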

CDs have modest seek time and high capacity. As a result, the High Sierra standard makes tradeoffs that reduce the number of seeks needed to read a file, at the expense of space efficiency.

2.7     DVD

The digital videodisk can store 13 times the data of a conventional CD giving it a playing time in excess of 2 hours. About 92% of all movies ever made can be recorded on a single DVD. The picture is nearly studio quality and better than VHS videotape or laser disc. It can contain 8 sound tracks, and 32 subtitles.

Digital Video Disk Proposed Specifications

Disk Diameter

120 mm [5 inches]

Disk Thickness

1.2 mm [two 0.6 mm disks back to back]

Memory Capacity

5 G bytes per side

Track Pitch

0.725 µm

Laser Wavelength

650/635 nanometers

Numerical Aperture


Error Correction

RS-PC [Reed Solomon Product Code]

Movie Running Time

142 minutes per side

Average Movie Data Rate

4.69 Mbps

Broadcast Running Time

74 minutes per side

Average Broadcast Data Rate

9 Mbps
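The running-time rows in the table above follow from the capacity and the average data rates:

```python
# Running-time check for the spec table: a 5 GB side streamed at the
# quoted average movie (4.69 Mbps) and broadcast (9 Mbps) data rates.
CAPACITY_BITS = 5e9 * 8          # 5 G bytes per side

movie_minutes = CAPACITY_BITS / 4.69e6 / 60
broadcast_minutes = CAPACITY_BITS / 9e6 / 60

print(f"{movie_minutes:.0f} min")      # ~142 minutes per side
print(f"{broadcast_minutes:.0f} min")  # ~74 minutes per side
```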


A single DVD can store movies in multiple formats:

•   Pan and scan version — fits the picture to the display height. The outside margins can be seen by panning the image

•   Letterbox version — fits the picture to the screen width, thus displaying a black region above and below the picture

•   Special anamorphic version — supports high resolution full screen display on widescreen televisions


Although a DVD is the same size as a CD, it has a larger capacity because the pits are smaller, the spiral tighter, and it can have two layers on each side of the disc.

DVD and CD Characteristics

	DVD	CD

Diameter	120 mm	120 mm

Substrate Thickness	0.6 mm	1.2 mm

Track Pitch	0.74 µm	1.6 µm

Minimum Pit Length	0.40 µm	0.834 µm

Laser Wavelength	650 and 635 nm (red)	780 nm (infrared)

Numerical Aperture	0.60	0.45

Capacity (per layer)	4.7 GB	0.68 GB

Reference user data rate	1,108 Kbytes/sec, nominal	Mode 1: 153.6 Kbytes/sec; Mode 2: 176.4 Kbytes/sec

Video data rate	1 to 10 Mbps variable (video, audio, subtitles)	1.44 Mbps (video, audio)

Video Compression	MPEG-2	MPEG-1

Sound tracks	Mandatory (NTSC): 2-channel linear PCM and/or 2-channel/5.1-channel Dolby Digital™ (AC-3); optional: up to 8 streams of data	2 Channel-MPEG

Subtitles	Up to 32 languages	Open caption only


2.7.1    DVD Features


Because DVD is a disc-based medium rather than tape, it is possible to pause, play in slow motion or fast-forward. These random access features allow for multiple movie endings, interactive video games, multiple camera angles, etc.

Parental Control

Parental control allows parents to password protect programs that they do not want children to view. A variation of this lockout facility allows different versions of a movie to be stored on the same disc: the director’s cut, an R-rated version and a PG-13 version.

Closed Captioning and Multiple Languages

The DVD format can support up to eight languages for a single movie. It also supports 32 closed caption tracks.

Theater Quality Audio

DVD incorporates either Dolby’s AC3 Surround Sound or MPEG-2 audio. AC3 provides six channels and MPEG-2 provides up to eight.

Dual Layer Process

For the 8.5 Gbyte DVD, the second information layer may be molded into the second substrate or it may be added as a photopolymer (2-P) layer. In either case, a semi-reflector layer is required to allow both information layers to be read from one side of the disc.

For the 17 Gbyte DVD, it is necessary to produce two dual-layer substrates, and bond them together.

Additional References on DVD

2.8     MMX

The MMX Pentium processor from Intel adds 57 instructions to accelerate various multimedia functions. These instructions are carried out in the 8 existing floating-point registers. Since MMX instructions operate on integers, they do not speed up the floating-point geometry calculations used in 3D graphics; they do, however, accelerate rendering (rasterization) calculations.

The principal advantage of MMX is in executing SIMD† operations.
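The SIMD idea can be modeled in a few lines: one operation is applied simultaneously to several small values packed into a single wide register. The sketch below mimics an MMX-style packed byte add (cf. the PADDB instruction) in plain Python — an illustration of the concept, not real MMX code.

```python
# Model of a SIMD packed add: eight 8-bit values in one 64-bit "register".

def pack8(values):
    """Pack eight 8-bit values into one 64-bit integer."""
    word = 0
    for i, v in enumerate(values):
        word |= (v & 0xFF) << (8 * i)
    return word

def paddb(a, b):
    """Packed add with wraparound, lane by lane, as one 'instruction'."""
    result = 0
    for i in range(8):
        lane = ((a >> (8 * i)) + (b >> (8 * i))) & 0xFF  # no carry between lanes
        result |= lane << (8 * i)
    return result

pixels = pack8([10, 20, 30, 40, 50, 60, 70, 80])
delta  = pack8([5] * 8)        # brighten eight pixels in one operation
out = paddb(pixels, delta)
print([(out >> (8 * i)) & 0xFF for i in range(8)])  # [15, 25, 35, 45, 55, 65, 75, 85]
```

This is exactly the pattern that occurs in rendering: the same small-integer operation repeated across many pixels or audio samples.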

Additional References on MMX


2.9     I/O


Input/Output ports are used to connect different devices to the computer. Until recently, a different port was required for each type of device. Some of the more noteworthy I/O ports are:


•   PS-2 port

•   Keyboard port

•   RS-232 serial port

•   IrDA infrared port

•   Universal Serial Bus

•   1394 FireWire

•   Parallel ports (SPP, EPP, ECP)

•   Game port

2.9.1    Serial Ports

PS-2 Port

The PS/2 port was developed by IBM to support a keyboard, mouse, trackball or touch pad.

PS/2 ports use synchronous serial TTL signals to communicate between the keyboard or mouse and the computer.










PS/2 Mini-DIN Connector

Pin 1    Data
Pin 2    Not connected
Pin 3    Ground
Pin 4    + 5 Volts
Pin 5    Clock
Pin 6    Not connected






Bi-directional communication is possible because open collector drivers allow either end to force a logical 0. The host can force a retransmission by holding the clock line at zero.
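Each byte on the PS/2 link is framed as 11 bits: 1 start bit (low), 8 data bits sent LSB first, 1 odd parity bit, and 1 stop bit (high). A sketch of building and checking such a frame (0x1C is the scan code a set-2 keyboard sends for the 'A' key):

```python
# Encode and verify a PS/2 frame: start (0), 8 data bits LSB first,
# odd parity, stop (1).

def ps2_frame(byte):
    data = [(byte >> i) & 1 for i in range(8)]
    parity = 1 - sum(data) % 2        # odd parity: total count of 1s is odd
    return [0] + data + [parity, 1]   # start, data, parity, stop

def ps2_check(bits):
    """Validate framing and parity; return the data byte or raise."""
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("bad framing")
    if sum(bits[1:10]) % 2 != 1:      # data + parity must hold an odd count of 1s
        raise ValueError("parity error")
    return sum(b << i for i, b in enumerate(bits[1:9]))

f = ps2_frame(0x1C)
print(f, hex(ps2_check(f)))           # 11 bits, decodes back to 0x1c
```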

The data line transition is made on the falling edge of the clock signal and is usually sampled when the clock is low. Each data packet is composed of 11 bits: 1 start bit (low), 8 data bits, 1 odd parity bit and 1 stop bit (high).

Keyboard Port

PC and XT models used a simple unidirectional serial port; however, newer computer keyboard ports are more complex.

The keyboard port connector is usually a 5-pin DIN connector, but IBM computers use the PS/2 connector. IBM has also introduced a modular AMP jack at the rear of the keyboard.

RS-232

RS-232 was adopted by the EIA† in 1960 and evolved to RS-232D by 1987.

Most RS-232 serial ports use a DB-25 connector; however, many PCs use DB-9 connectors since only 9 signals are needed. Normally the male connector is on the DTE side and the female connector is on the DCE side.

Inside a computer, data is handled in parallel. To leave the computer, it first passes through a UART, which converts it to a serial stream that flows out the serial port to the DCE. Some computers have modems built in; this can create hardware conflicts between the internal modem and the serial port, in which case the external serial port has to be disabled.

An asynchronous modem adds start, parity and stop bits to each character in the serial string before transmission. This constitutes a data frame.

The idle state is always taken as a high, and start is active low. There can be anywhere from 5 to 8 data bits in a frame, but most BBS networks use 7 or 8 bits.

Since a parity bit cannot detect errors affecting an even number of bits, the communications software often sets the parity to none. When parity is used, even parity means the parity bit is set so that the total number of logical 1s in the frame is even; odd parity is the reverse.

Two modems must use the same type of frame in order to communicate. Most BBS networks use: 8 data bits, no parity, and one stop bit. This is sometimes written as 8, N, 1. Another popular arrangement is 7, E, 1.
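The framing rules above are easy to sketch: the idle line is high, a low start bit marks the frame, data follows LSB first, an optional parity bit is appended, and a high stop bit ends the frame.

```python
# Build the bit sequence an asynchronous serial line carries for one byte,
# e.g. 8,N,1 or 7,E,1.

def frame(byte, data_bits=8, parity="N", stop_bits=1):
    bits = [0]                                          # start bit (low)
    data = [(byte >> i) & 1 for i in range(data_bits)]  # LSB first
    bits += data
    if parity == "E":                # even parity: total count of 1s is even
        bits.append(sum(data) % 2)
    elif parity == "O":              # odd parity: total count of 1s is odd
        bits.append(1 - sum(data) % 2)
    bits += [1] * stop_bits                             # stop bit(s) (high)
    return bits

print(frame(0x41, 8, "N", 1))   # 'A' as 8,N,1 -> 10 bits on the line
print(frame(0x41, 7, "E", 1))   # 'A' as 7,E,1 -> also 10 bits
```

Note that both popular arrangements cost 10 bits per character, which is why 9,600 bps moves roughly 960 characters per second.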

The RS-232 is widely used to interconnect devices located within 15 meters of each other. Longer distances can be achieved, but at lower bit rates.

European equivalent    V.24 and V.28
Typical 1 level        -5 to -15 volts
Typical 0 level        +5 to +15 volts
Maximum bit rate       19.2 Kbps


RS-232 DCE Connector (DB-25)

Pin 1     Chassis Ground
Pin 2     Transmit Data
Pin 3     Received Data
Pin 4     Request to Send
Pin 5     Clear to Send
Pin 6     Data Set Ready
Pin 7     Signal Ground
Pin 8     Carrier Detect
Pin 12    Secondary Received Signal Detect
Pin 13    Secondary Clear to Send
Pin 14    Secondary Transmit Data
Pin 15    DCE Transmitter Timing
Pin 16    Secondary Received Data
Pin 17    DCE Receiver Timing
Pin 19    Secondary Request to Send
Pin 20    Data Terminal Ready
Pin 21    Signal Quality Detector
Pin 22    Ring Indicator
Pin 23    Data Signaling Rate Selector
Pin 24    DTE Transmitter Timing

The remaining pins (9, 10, 11, 18 and 25) are unassigned or reserved for testing.


9 Pin Serial Port Assignments

Pin 1    Data Carrier Detect
Pin 2    Receive Data
Pin 3    Transmit Data
Pin 4    Data Terminal Ready
Pin 5    Signal Ground
Pin 6    Data Set Ready
Pin 7    Request to Send
Pin 8    Clear to Send
Pin 9    Ring Indicator

IrDA

The infrared port is used in portable devices such as notebook computers, PDAs, mobile phones and other handheld devices.

It is a half-duplex system with a maximum data rate of 115.2 Kbps. The IR LED peak wavelength ranges from 0.85 um to 0.90 um.

IrDA Infrared Link Access Protocol (IrLAP)

IrLAP is a variation of HDLC with three types of frames:

U frames ‑ used to establish and remove connections, discover other stations’ device addresses, etc.

S or supervisory frames ‑ assist in the transfer of information; they may be used to acknowledge receipt of I frames and to convey ready and busy conditions.

I or information frames ‑ transfer information from one station to another

The protocol includes procedures for connection startup, data rate negotiation, and information transfer.

USB

USB is a low-cost port that supports isochronous, bulk, interrupt and control data transfer types. It uses tokens, and transfers data at 1.5 or 12 Mbps.

1394 FireWire

IEEE 1394 is a ‘plug and play’ scalable, flexible, low-cost digital interface for consumer electronics. It defines the media, topology, and protocol. It supports both arbitrated asynchronous and isochronous data.

It can be implemented as a backplane or a point-to-point cable. The backplane version operates at 12.5, 25 or 50 Mbps.  The cable version operates at 100, 200 or 400 Mbps, contains two power conductors, and two shielded twisted data pairs.

The cable provides 8 to 40 Vdc at 1.5 amps.

The connector is derived from the Nintendo GameBoy. It is small and easy to blindly insert. There are no terminators, or manual IDs to be set. The bus supports up to sixteen hops between any two devices. A splitter can be used to provide another port.

It is a peer-to-peer interface allowing several computers to share the same peripheral without additional hardware.

A bus segment can connect up to 63 devices, each up to 4.5 meters apart. Over 1000 bus segments may be connected by bridges.

2.9.2    Parallel Ports

The parallel port sends data over 8 separate wires and can achieve faster communications than a serial port since there is no need to encode and decode the signal. Simple parallel ports send data at 115,200 bps, and newer enhanced ports are up to 100 times faster.

SPP

The standard parallel port was developed by Centronics. However, IBM chose the DB-25 connector for the computer end. Consequently, a special adapter or printer cable is now a standard accessory.




Signal                DB-25 Pin    Centronics 36 Pin
Strobe                1            1
Data 0                2            2
Data 1                3            3
Data 2                4            4
Data 3                5            5
Data 4                6            6
Data 5                7            7
Data 6                8            8
Data 7                9            9
Acknowledge           10           10
Busy                  11           11
Paper End             12           12
Select                13           13
Auto Feed             14           14
Error                 15           32
Initialize Printer    16           31
Select In             17           36
Ground                18–25        16, 19–30



The major signals on these lines are:

Strobe ‑ The strobe tells the printer when to sample the information on the data lines. It is usually high and pulses low when a byte of data is transmitted. The total time needed to transmit a full byte is around two microseconds.

Data ‑ These lines carry the information and special codes to set the printer in different modes like italics. These lines function with standard TTL voltages, 5 volts for a logical 1 and 0 volts for a logical 0.

Acknowledge ‑ This line is used for positive flow control. It is normally high and goes low for about 8 microseconds after a character has been received.

Busy ‑ Each time the printer receives a byte, it drives this line high to tell the computer to stop sending. When the printer has finished printing the data, placing it in the buffer, or setting its internal functions, the line goes low again.

Paper End ‑ This line goes high when the printer runs out of paper. When this happens, the busy line also goes high so the computer stops sending data.

Select ‑ This line tells the computer when the printer is selected or online. When it is low, the computer will not send data.

Auto Feed ‑ Not all printers treat the carriage return the same way. Some just bring the print head to the beginning of the line while others also roll the paper one line up. Many printers have a switch to tell how to interpret the carriage return.

Error ‑ This is a general error line. There is no way of knowing the exact error from this line. When an error is detected, it goes low. Some errors are: cover open, print head jammed, a broken belt and so on.

Initialize Printer ‑ This line is used to reinitialize the printer. This occurs at the start of a print job, since special formatting codes might have been sent to the printer during the previous job.

Select Input ‑ This line allows some computers to control whether the printer is online or not.

Ground ‑ This is a regular signal ground and is used as a reference for the low signal or logical 0.

EPP

This port is very similar to the SPP port.






EPP Connector (DB-25)

Pin 2–9    Address/Data 0–7
Pin 11     Wait (paired with 16)
Pin 12     Paper End
Pin 14     Data Strobe
Pin 16     Initialize Printer (paired with 11)
Pin 17     Address Strobe
Pin 18     Ground (Data)
Pin 19     Ground (paired with 1)
Pin 20     Ground (paired with 10)
Pin 21     Ground (paired with 12)
Pin 22     Ground (paired with 13)
Pin 23     Ground (paired with 14)
Pin 24     Ground (paired with 15)
Pin 25     Ground (paired with 17)

ECP

ECP† adds two new modes of communication: a fast two-way mode and one that uses RLE compression. ECP is backward compatible with older printers and devices. By negotiating with the printer, the port automatically transfers the data in the fastest way available.
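Run-length encoding pays off on printer data, which often contains long runs of identical bytes (blank raster lines, solid fills). The sketch below shows the general idea — a repeat count followed by the byte, with runs capped at 128 as in ECP — while omitting the real ECP command framing.

```python
# Run-length encode/decode in the spirit of ECP's RLE mode.

def rle_encode(data, max_run=128):
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < max_run:
            run += 1                      # extend the run, up to max_run
        out.append((run, data[i]))        # (count, value) pair
        i += run
    return out

def rle_decode(pairs):
    out = bytearray()
    for run, value in pairs:
        out += bytes([value]) * run
    return bytes(out)

raster = b"\x00" * 300 + b"\xff" * 10     # long white run + short black run
encoded = rle_encode(raster)
assert rle_decode(encoded) == raster
print(len(raster), "bytes ->", len(encoded) * 2, "bytes")   # 310 bytes -> 8 bytes
```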

The ECP port can also accept multiple devices. It uses its own addressing scheme, and sends a channel address command on the parallel port bus. This gives it the ability to support 128 different devices or channel addresses.

SCSI

Advances in SCSI Parallel Interface technology by the SCSI Trade Assn.

Parallel SCI by DEC

Ultra2 SCSI by the SCSI Trade Assn.


SCSI† is both a bus specification and a command set. It allows all device types on the bus to appear indistinguishable to the host. SCSI peripherals are used on all types of operating systems.


The SCSI-1 standard supports asynchronous and synchronous data transfers. Synchronous transfer rates are up to 3-MBytes per second.

The SCSI-1 standard was quite broad, and allowed many “vendor specific” options. Implementations varied and vendor compatibility problems arose. The Common Command Set (CCS) was developed to resolve these problems.


SCSI-2 was released in 1993 and has several improvements:

·         Transfer rates increased to 10 Mbytes per second

·         Lower overhead

·         Improved functionality

·         Improved reliability

Differential SCSI-2

SCSI-2 can use differential signaling, which improves bus performance by:

·         Reducing noise

·         Increasing maximum bus length

·         Increasing maximum bit rate

Fast/Narrow SCSI-2

This is synchronous and operates at 10 Mbytes per second.

Fast/Wide SCSI-2

This uses a 32-bit parallel data path and is used in disk arrays.


The SCSI 3 standard has some further improvements:

Burst rates of 40-MBps

Increased separation

Expanded address

Higher reliability and robustness

SCSI 3 has both a parallel and serial version.

2.9.3    Analog Ports

Game Port

The original joystick on the PC was analog, composed of two buttons and two 100 KΩ linear potentiometers.




Pin    Regular Port              MIDI Enabled Port

1      + 5 Vdc                   + 5 Vdc
2      Joystick A, Button 1      Joystick A, Button 1
3      Joystick A, X Axis        Joystick A, X Axis
4      Ground                    Ground
5      Ground                    Ground
6      Joystick A, Y Axis        Joystick A, Y Axis
7      Joystick A, Button 2      Joystick A, Button 2
8      + 5 Vdc                   + 5 Vdc
9      + 5 Vdc                   + 5 Vdc
10     Joystick B, Button 1      Joystick B, Button 1
11     Joystick B, X Axis        Joystick B, X Axis
12     Ground                    MIDI TXD
13     Joystick B, Y Axis        Joystick B, Y Axis
14     Joystick B, Button 2      Joystick B, Button 2
15     + 5 Vdc                   MIDI RXD


A digital joystick generally presents 2.5 volts when no button is pressed, 0 volts when the up or left button is pressed, and 5 volts when the down or right button is pressed.

Button signals ‑ These pins carry TTL-level signals; they are normally high (logical 1) and go low (logical 0) when a button is pressed.


S-Video

This is short for super or separate video and is sometimes called Y/C video.

When a video signal is broadcast, the chrominance and luminance are combined into a signal called composite video. However, when video signals are recorded on tape, or displayed on an output device, they are separated.

By keeping the chrominance and luminance components separate, S-video reduces the amount of signal processing required and produces a sharper image.

Computer monitors are designed for RGB signals. Most digital video devices, such as digital cameras and game machines, produce video in RGB format.

Enhanced Video Connector

The EVC connector supports a video bandwidth of 2 GHz on the RGB and clock lines.

VESA EVC Pin Out Standard

Video out:        Red Video Out, Green Video Out, Blue Video Out, Pixel Clock Out, Common Ground Return
Sync:             Horizontal Sync, Vertical Sync, Stereo Sync, Sync Return
Audio:            Right Audio Out, Left Audio Out, Audio Out Return; Right Audio In, Left Audio In, Audio In Return
DDC:              DDC Data, DDC Clock, DDC Return, + 5 V DC
USB:              USB + Data, USB – Data, USB/1394 Common Shield
1394:             1394 Vp, 1394 Vg, 1394 Pair A + Data, 1394 Pair A – Data, 1394 Pair B + Clock, 1394 Pair B – Clock
Coaxial (C1–C4):  Y or Composite Video In, Chroma Video In, Video In Return





2.9.4    AGP Interface

Take the AGP Tutorial by Intel


AGP Interface Spec by Intel

AGP Pro. Spec. by Intel

AGP Mechanical Spec. by Intel

AGP Design Guide by Intel


AGP (Accelerated Graphics Port) was developed by Intel to speed up 3-D graphics. It creates a dedicated channel between the graphics controller and main memory, thus bypassing the PCI bus.

2.10   File Types

File formats




PNG Format


GIF89a Specification


2.10.1  MIME

In 1992, the IETF† created a standard called MIME† that allows non-text data to be sent over e-mail. Some of the file types it supports are word processing documents, PostScript, graphic, binary files, video, and voice messages.
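Python's standard email package can compose such a MIME message directly: a plain-text body plus a Base64-encoded binary attachment. The addresses and attachment bytes below are illustrative placeholders.

```python
# Compose a multipart MIME message: text/plain body + image attachment.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"        # illustrative addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Multimedia over e-mail"
msg.set_content("The chart is attached.")            # text/plain part
msg.add_attachment(b"\x89PNG fake image bytes",      # placeholder binary data
                   maintype="image", subtype="png",
                   filename="chart.png")

# Adding the attachment promotes the message to multipart/mixed.
print(msg.get_content_type())            # multipart/mixed
```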

The various MIME types include[8]:




Type           Subtypes
Text           Plain, Richtext, Enriched
Multipart      Mixed, Alternative, Digest, Parallel, Appledouble
Message        RFC822, Partial, External-body
Application    Octet-stream, PostScript, SGML
Video          MPEG, QuickTime


Various computer platforms have developed their own file formats to store audio and video files.

RIFF† files contain code words that describe the file type, such as audio, still image, or video, and define how the data is organized. Each of these self-describing structures is known as a chunk. A single file can contain several different data types or chunks.

Some of the file types based on the RIFF format include: PAL†, RDIB†, RMID†, RMMP†, and WAVE†.
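The chunk structure is simple enough to parse in a few lines: each chunk is a 4-byte ASCII ID, a 4-byte little-endian size, then the data, padded to an even length. The sketch below builds a minimal WAVE file by hand and walks its chunks.

```python
# Walk RIFF chunks: 4-byte ID, 4-byte little-endian size, payload.
import struct

def riff_chunks(data):
    """Yield (chunk_id, payload) pairs from a sequence of RIFF chunks."""
    pos = 0
    while pos + 8 <= len(data):
        cid, size = struct.unpack_from("<4sI", data, pos)
        yield cid.decode("ascii"), data[pos + 8 : pos + 8 + size]
        pos += 8 + size + (size & 1)      # chunks are word-aligned

# Minimal hand-built WAVE file: RIFF header, 'fmt ' chunk, empty 'data' chunk.
fmt = struct.pack("<HHIIHH", 1, 1, 8000, 16000, 2, 16)  # PCM, mono, 8 kHz, 16-bit
body = (b"WAVE"
        + b"fmt " + struct.pack("<I", len(fmt)) + fmt
        + b"data" + struct.pack("<I", 0))
wav = b"RIFF" + struct.pack("<I", len(body)) + body

form_id, payload = next(riff_chunks(wav))
print(form_id, payload[:4])              # the form type names the file's contents
for cid, chunk in riff_chunks(payload[4:]):
    print(cid, len(chunk))
```

The same walker handles any RIFF variant (WAVE, RDIB, RMID, …) because the container layout, not the content, is what RIFF standardizes.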

2.10.2  Graphic or Image Formats


The BMP† image format is native to the Windows platform and is device independent. A bit-mapped file can be up to 4 Gbytes long.


The GIF† was developed by CompuServe as a protocol for interchanging graphic data that is independent of the display hardware.

GIF is defined in terms of blocks and sub-blocks, which contain parameters and data used to reproduce a graphic. It utilizes color tables to render raster-based graphics and can support up to 256 colors.

There are three groups of blocks: control, graphic-rendering, and special purpose. The principal structures in a GIF file are:

·         Header – this is 6 bytes long and identifies the GIF version

·         Logical screen descriptor – this describes the size of the image in pixels

·         Color table blocks – these can be local or global. Global tables identify the colors for all graphics in the file, while local tables override the global table for a specific image.

·         Image descriptor – identifies the specific characteristics of the image such as its size, interleaving and type of color tables

·         Trailer – a single byte that signifies the end of the file

GIF files do not include any error detection or correction capabilities.

GIF files use Lempel-Ziv-Welch (LZW) compression. This is a variable-length, proprietary technique licensed by Unisys.
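The core of LZW fits in a dozen lines: the coder builds a string table on the fly and emits one code per longest match; the decoder rebuilds the identical table, so no table is ever transmitted. (GIF adds variable-width codes and clear/end codes, omitted in this sketch.)

```python
# Minimal LZW coder illustrating the idea behind GIF compression.

def lzw_encode(data):
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    out, s = [], b""
    for c in data:
        sc = s + bytes([c])
        if sc in table:
            s = sc                       # extend the current match
        else:
            out.append(table[s])         # emit code for the longest match
            table[sc] = len(table)       # add the new string to the table
            s = bytes([c])
    if s:
        out.append(table[s])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(f"{len(codes)} codes for 24 input bytes")   # 16 codes
```

Repetitive data (like runs in an indexed-color image) compresses well because ever-longer strings enter the table and are replaced by single codes.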


The PNG† format was developed when Unisys decided to collect royalties on GIF.


EPS† files contain commands written in the PostScript language. These can print text, select fonts, make simple drawings and paint bit images. The encapsulation refers to a bounding box that defines the image limits on a page.


The PCD† format stores images based on the Kodak PhotoCD system. It is capable of storing an image at several different resolutions. A base image is used as a reference. Higher resolutions are stored as difference signals from the reference. Each pixel has a 12-bit resolution.


PCX

This format was developed by ZSoft for its PC Paintbrush program. It provides image compression that can be used on any size image with unlimited colors.


The TGA† family of formats was developed by TrueVision for a variety of file types.

2.10.3  Audio Formats

Overview of Digital Audio Interface Data Structures by Cirrus Logic

A Tutorial on MIDI & Wavetable Music Synthesis by Cirrus Logic


There are two basic types of sound reproduction, FM synthesis and wave table synthesis.

MIDI† Format

This encoding scheme supports up to 16 signal channels and is used primarily for music synthesizers. The basic synthesizer can play 6 melodic notes and 2 percussive notes simultaneously. Extended synthesizers can play 9 melodic timbres and 8 percussive timbres simultaneously.

This technique is roughly 100 times more compact than the WAV format. The quality of sound reproduction is dependent on the sound synthesizer at the receiving end.

WAVE Format

The wav extension refers to waveform audio. The actual sound has been digitized and stored in memory; playback is simply a matter of looking up the sound in a look-up table. There are a number of proprietary ways of doing this. Some of the more prominent manufacturers include Roland, Creative Labs, Turtle Beach, Aria, Gravis, and Ensoniq Soundscape.

MP3 Format

MP3 refers to MPEG Layer-3. It uses a perceptual audio coding technique to achieve high audio compression without significant loss of quality.

The MP3 algorithm is based on a psycho-acoustic model. It eliminates frequencies that the ear is unable to perceive and can achieve compression rates of 1:12.

A typical 1 Mbyte MP3 file can contain one minute of music or several minutes of spoken words. It is often used for CD quality audio files. The average CD quality song requires 3 - 6 Mbytes.
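These figures can be checked with simple arithmetic, assuming typical CD audio (44.1 kHz, 16-bit, stereo) and a 128 kbps MP3 stream — roughly the 1:12 compression quoted above:

```python
# Compare uncompressed CD audio to a 128 kbps MP3 stream for a 4-minute song.
cd_bytes_per_sec = 44_100 * 2 * 2      # samples/s * bytes/sample * channels
mp3_bytes_per_sec = 128_000 / 8        # 128 kbps -> bytes per second

song_sec = 4 * 60
cd_mb  = cd_bytes_per_sec  * song_sec / 1e6
mp3_mb = mp3_bytes_per_sec * song_sec / 1e6
print(f"uncompressed: {cd_mb:.0f} MB, MP3: {mp3_mb:.1f} MB "
      f"(ratio 1:{cd_bytes_per_sec / mp3_bytes_per_sec:.0f})")
```

The result — about 42 MB uncompressed versus under 4 MB as MP3 — matches the 3 to 6 Mbyte per-song figure quoted above.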

2.10.4  Video Formats

AVI Format

AVI Overview by John McGowan


The AVI† file format was developed by Microsoft to play movies with Video for Windows.


MOV† files are cross platform compatible. They are used by Quick Time on Macintosh and Windows platforms.

2.11   Video for Windows


VfW is a video extension [.avi†] that is bundled with Microsoft Windows. The principal component, Media Player, allows audio, video, and animation to be embedded in other applications. AVI files interleave the audio and video into one data stream in order to preserve synchronization.

If the video compression is performed in software only, either the quality or the frame rate suffers. Therefore, most production video clips use hardware to perform the video compression. There are 4 compression algorithms that come with Video for Windows:

·         Video 1 – this is the default compression algorithm. It supports 8 and 16-bit video, and adjusts the playback quality to suit the hardware.

·         Microsoft RLE Compressor – This is used only on older machines and supports 8-bit video with run-length encoding.

·         Intel Indeo Compressor – Used for 24 bit video and requires hardware to perform real-time compression.

·         SuperMac CinePak Compressor – This is a 24-bit format that cannot be compressed in real-time, but produces high quality 320x240 resolution at 15 frames per second.

2.12   Plug and Play

The ACPI† is being developed to shift some of the features usually performed in the BIOS to the operating system.

Plug and Play Operation of the PCI Local Bus by Data Acquisition

ACPI White Paper by Compaq


Assignment Questions


Quick Quiz

1.     For multimedia applications, such as desktop conferencing, it [is, is not] important to synchronize audio and video.

2.     In multimedia communications [audio, video] is given a higher priority.

3.     The [PCI, MCA] bus uses central arbitration.

4.     Micro Channel supports up to [4, 8, 16] plug-in boards on the bus.

5.     A PCI bus can support [4, 8, 16] plug-in boards when running at 33 MHz

6.     A PCI-to-PCI bridge [can, cannot] be added to support more boards.

7.     The typical NTSC TV image has about __________ invisible lines.

8.     The viewing angle of plasma displays is quite [high, low].

9.     The horizontal synchronization rate is about [12.75, 15.75, 25.25] KHz.

10.   The vertical synchronization rate is about [30, 60, 90] Hz.

11.   A typical horizontal line contains [53.5, 73.5] µsec of luminance signal.

12.   The horizontal blanking pulse is about [5, 10, 15] µsec long.

13.   The horizontal synchronization pulse is about [3.2, 4.1, 5.4] µsec long.

14.   Since the modem was an external device in the past, the standard PC communicates via a UART and serial port. [True, False]

15.   V.80 modems are designed to operate at 33.6 Kbps and support real time audio and video. [True, False]

16.   To achieve a maximum rate of 56 Kbps, one end of a PSTN connection must terminate at a digital circuit. [True, False]

17.   A PC audio codec has a sampling rate of 44.1 KHz, and a resolution of 16 bits per sample. [True, False]

18.   CD-ROM speeds are specified in integer multiples of [150, 250] Kbytes/sec.

19.   PC based CD-ROMs utilize [CLV, CAV].

20.   CD-ROM tracks [are, are not] self-clocking.

21.   A logical mark on a CD corresponds to a land. [True, False]

22.   A DVD can contain [4, 6, 8] sound tracks.

23.   A DVD [can, cannot] store movies in multiple formats.

24.   The MMX processor adds 57 new instructions to accelerate various multimedia functions. [True, False]

25.   MMX instructions speed up 3D graphics. [True, False]

26.   The PS2 port was developed by IBM for keyboards only. [True, False]

27.   The RS-232 interface is hardly used anymore. [True, False]

28.   The IrDA interface was not intended for portable devices such as notebook computers. [True, False]

29.   USB supports isochronous, bulk, interrupt and control data transfer types. [True, False]

30.   FireWire can be implemented as a backplane or a point-to-point cable. [True, False]

31.   FireWire supports arbitrated asynchronous but not isochronous data. [True, False]

32.   Each IEEE 1394 bus segment can connect [15, 31, 63] devices.

33.   A simple parallel port can send data at 115 Kbps. [True, False]

Analytical Questions

1.     Sketch in detail the frequency-domain characteristics of the baseband NTSC broadcast video signal, and explain its components in terms of its time-domain elements.

Composition Questions

1.     Give 3 reasons for the aluminized layer on the phosphor screen in a CRT.

2.     Give 2 reasons for interlacing.

3.     What is the purpose of the shadow mask on a color CRT?

4.     Why do oscilloscopes use electrostatic deflection but television sets use magnetic deflection?

5.     What is the purpose of the blanking pulse?

6.     Make a sketch of one line of composite video, and label all of its components.

7.     Sketch and label the H sync components in the frequency domain.

8.     Explain the CIRC process used on a CD.

9.     Find any web sites devoted to micro channel.

10.   What do you think is the “killer app” for multimedia, and why?

For Further Research


Practical Digital Video with Programming Examples in C, Phillip E. Mattison, Wiley, 1994

Multimedia Bible, Winn L. Rosch, Sams Publishing

The McGraw-Hill Multimedia Handbook, Jessica Keyes


Computers & Microprocessors


Computer Architecture


Movie-2 Bus


Digital TV




56K Modems


Audio Data Structures












Synchronized Multimedia


[1]       Software Publishers Association

[2]       Interactive Multimedia Special Interest Group

†       Industry Standard Architecture

†       Micro Channel Architecture

†       Enhanced Industry Standard Architecture

†       VESA Local Bus

†       Peripheral Component Interconnect

†       Special Interest Group

[4]       Television Engineering Handbook, K. Blair Benson, ed., FIG. 13-13

[5]       Based on Fig 18.15, Electronic Communication, Roddy & Coolen

[6]       Based on Fig 18-16, Electronic Communication, Roddy & Coolen

[7]       Editorial in Laser Focus World, vol.31 no.11, November 1995

†       Data Access Arrangement

†       Carrierless AM/PM

†       Discrete Multitone

†       Orthogonal Frequency Division Multiplexing

†       Coded OFDM

†       Digital Audio Broadcast

†       Eight to Fourteen Modulation

†       Cross Interleave Reed-Solomon Coding

†       Single Instruction, Multiple Data

†       Electronic Industries Association

†       Extended Capabilities Port

†       Small Computer System Interface

†       Internet Engineering Task Force

†       Multipurpose Internet Mail Extensions

[8]       MIME Interoperability: Sharing Files Via E-mail, Network Computing, vol.7, no. 6, April 15, 1996

†       Resource Interchange File Format

†       PALette format

†       RIFF Device Independent Bitmap

†       RIFF MIDI

†       RIFF MultiMedia Movie

†       WAVEform audio

†       Bit MaPped

†       Graphics Interchange Format

†       Portable Network Graphic

†       Encapsulated PostScript

†       PhotoCD

†       TarGA

†       Musical Instrument Digital Interface

†       Audio Video Interleaved

†       MOVies

†       Audio-Video Interleaved

†       Advanced Configuration and Power Interface