
UPI

Hand Out

Course:

MULTIMEDIA INFORMATION SYSTEMS

(EK.351)

By:

Enjang Akhmad Juanda

NIP: 130896534

Electrical Engineering Education
Faculty of Technology and Vocational Education
Universitas Pendidikan Indonesia, Jl. Dr. Setiabudhi 207 Bandung
www.upi.edu


MULTIMEDIA INFORMATION SYSTEMS

DESCRIPTION AND SYLLABUS

Course description: EK.351. Multimedia Information Systems: S1, 2 credit units (SKS), Semester 7. This course aims to give students, during and after the lectures, an understanding of what multimedia and multimedia information systems are, what their benefits are, and why multimedia systems matter. It then covers the standards and technologies behind them, their architectures, and their trade-offs, followed by signal characteristics and signal processing, particularly as related to multimedia. The final topic is the analysis and design of multimedia systems. The presentation is kept contextual, combining theoretical and practical perspectives. Teaching aids and media include a whiteboard and its accessories, an OHP, a digital projector (e.g., InFocus), demonstration equipment, and so on. Evaluation integrates attendance, activity, attitude and cognition, completion of assignments, presentations and their defense, and the midterm (UTS) and final (UAS) examinations.

   

Course syllabus:

1. Course identity

Course title: Multimedia Information Systems
Course code: EK.351
Credits: 2 SKS
Semester: VII
Course group: MKBS
Study program/level: Electrical Engineering Education / S1
Status: Compulsory
Prerequisites:
  1. Fundamentals of Electrical Engineering
  2. Digital Techniques
  3. Fundamentals of Computing
  4. Fundamentals of Programming
  5. Electric Circuits I
Lecturer: Dr. Enjang A. Juanda, M.Pd., MT.

 


2. Objectives

After completing this course, students are expected to be able to explain, and as far as possible put into practice, multimedia information systems: their techniques and analysis, as well as their application and development in the real world and in society.

 

3. Content description

This course covers, in a contextual manner, the definitions, equipment, techniques, engineering, analysis, applications, and development of the above aspects of multimedia information systems.

 

4. Teaching approach

Expository and inquiry.

- Methods: lectures, question and answer, discussion and problem solving, case analysis.

- Assignments: presentations, paper writing, and exploration of sources via the Internet.

- Media: OHP, LCD projector/PowerPoint.

 

5. Evaluation

- Attendance

- Presentation and discussion assignments

- Paper

- Midterm examination (UTS)

- Final examination (UAS)

 

6. Course material per meeting

I). Discussion of the course syllabus, accommodating input from students so that topics considered unimportant can be dropped and topics considered important can be added. In line with the syllabus, this meeting also presents the objectives, scope, and procedures of the course, an explanation of the assignments students must complete, and the examinations they must take, including the types of questions and how to answer them, as well as the sources. Finally, an introductory overview of multimedia and multimedia information systems is given.

II). Concepts and definitions; the global multimedia system

III). Types and components of multimedia

IV). Multimedia implementations in industry and the professions

V). Multimedia implementations in society

VI). Multimedia standards and technology 1

VII). Multimedia standards and technology 2

VIII). Midterm examination (UTS)

IX). Architectures and their trade-offs in multimedia information

X). Signal characteristics, general and multimedia-specific 1

XI). Signal characteristics, general and multimedia-specific 2

XII). Analysis and design of multimedia systems

XIII). Analysis and design of multimedia information systems

XIV). Case study 1

XV). Case study 2 and reflection

XVI). Final examination (UAS)

7. Literature

Primary sources:

1. Raghavan, S.V. and Tripathi, Satish K., Networked Multimedia Systems: Concepts, Architectures, and Design, Prentice Hall, New Jersey, 1998.

2. Held, Gilbert, Voice and Data Internetworking, McGraw-Hill, New York, 2000.

3. Schaphorst, Richard, Videoconferencing and Videotelephony: Technology and Standards, Artech House Inc., Boston, 1996.

References:

- Related journals:

  1. IEEE Telecommunication Transactions.

  2. IEEE Multimedia Transactions.

- The Internet


The lecturer can be contacted via:

1. Home address and telephone: Jl. Suryalaya IX No.31, Bandung 40265, T.7310350

2. E-mail address: [email protected]


What and Why Multimedia?

• Computers appeared roughly four decades ago.

• Initially, computers were used for computation in support of scientific work.

• Today, computers are present and used everywhere. Computers can already imitate many human faculties: seeing, hearing, feeling, remembering, thinking, and so on. The latter are tied to the emergence and capabilities of multimedia.

THE GENERAL FUNCTION OF COMPUTERS

• Is to process information.

• Information takes the form of, or relates to: sound, taste/touch, sight, smell, and so on. All of it must be tangible (here: capturable electronically).

• All of it can be stored as digital quantities, sequences of 0s and 1s. Everything comes down to the capture, storage, coding, and display of these generic digital quantities.

Computing environment:


With today's (multimedia) computing environment, computers and their networks have ever greater power in many respects. Video, audio, and data integrated into a package of information are the attributes of multimedia.


 What is a decibel?

And what are the different types of decibel measurement: dB, dBA, dBC, dBV, dBm and dBi? How are they related to loudness, to phons and to sones? This page describes and compares them all and gives sound file examples. A related page allows you to measure your hearing response and to compare with standard hearing curves. 

 


Definition and examples

The decibel (dB) is used to measure sound level, but it is also widely used in electronics, signals and communication. The dB is a logarithmic unit used to describe a ratio. The ratio may be power, sound pressure, voltage or intensity or several other things. Later on we relate dB to the phon and the sone (units related to loudness). But first, to get a taste for logarithmic units, let's look at some numbers. (If you have forgotten, go to What is a logarithm?)

For instance, suppose we have two loudspeakers, the first playing a sound with power P1, and another playing a louder version of the same sound with power P2, but everything else (how far away, frequency) kept the same.

The difference in decibels between the two is defined to be

10 log (P2/P1) dB where the log is to base 10.

If the second produces twice as much power as the first, the difference in dB is

10 log (P2/P1) = 10 log 2 = 3 dB.

If the second had 10 times the power of the first, the difference in dB would be  

10 log (P2/P1)= 10 log 10 = 10 dB.

If the second had a million times the power of the first, the difference in dB would be  


10 log (P2/P1) = 10 log 1000000 = 60 dB.

This example shows one feature of decibel scales that is useful in discussing sound: they can describe very big ratios using numbers of modest size. But note that the decibel describes a ratio: so far we have not said what power either of the speakers radiates, only the ratio of powers. (Note also the factor 10 in the definition, which puts the 'deci' in decibel).

Sound pressure, sound level and dB

Sound is usually measured with microphones and they respond (approximately) proportionally to the sound pressure, p. Now the power in a sound wave, all else equal, goes as the square of the pressure. (Similarly, electrical power in a resistor goes as the square of the voltage.) The log of the square of x is just 2 log x, so this introduces a factor of 2 when we convert to decibels for pressures. The difference in sound pressure level between two sounds with p1 and p2 is therefore:

20 log (p2/p1) dB = 10 log (p2^2/p1^2) dB = 10 log (P2/P1) dB, where again the log is to base 10.
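As a quick numeric check of these two definitions, here is a minimal Python sketch (the helper names are ours, chosen for illustration) reproducing the ratios worked out above:

    import math

    def db_from_power_ratio(p2_over_p1):
        """Level difference 10*log10(P2/P1) in dB."""
        return 10 * math.log10(p2_over_p1)

    def db_from_pressure_ratio(p2_over_p1):
        """Level difference 20*log10(p2/p1) in dB (power goes as pressure squared)."""
        return 20 * math.log10(p2_over_p1)

    print(db_from_power_ratio(2))      # ~3.01 dB: doubling the power
    print(db_from_power_ratio(10))     # 10 dB
    print(db_from_power_ratio(1e6))    # 60 dB
    print(db_from_pressure_ratio(2))   # ~6.02 dB: doubling the pressure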

Sound files to show the size of a decibel

What happens when you halve the sound power? The log of 2 is 0.3, so the log of 1/2 is −0.3. So, if you halve the power, you reduce the power and the sound level by 3 dB. Halve it again (down to 1/4 of the original power) and you reduce the level by another 3 dB. That is exactly what we have done in the first graphic and sound file below.

The first sample of sound is white noise (a mix of all audible frequencies, just as white light is a mix of all visible frequencies). The second sample is the same noise, with the voltage reduced by a factor of the square root of 2. The reciprocal of the square root of 2 is approximately 0.7, so −3 dB corresponds to reducing the voltage or the pressure to 70% of its original value. The green line shows the voltage as a function of time. The red line shows a continuous exponential decay with time. Note that the voltage falls by 50% for every second sample.

Note, too, that a doubling of the power does not make a huge difference to the loudness. We'll discuss this further below, but it's a useful thing to remember when choosing sound reproduction equipment.  

How big is a decibel?

In the next series, successive samples are reduced by just one decibel.

One decibel is close to the Just Noticeable Difference (JND) for sound level. As you listen to these files, you will notice that the last is quieter than the first, but it is rather less clear to the ear that the second of any pair is quieter than its predecessor. 10*log10(1.26) = 1, so to increase the sound level by 1 dB, the power must be increased by 26%, or the voltage by 12%.

What if the difference is less than a decibel? Sound levels are rarely given with decimal places. The reason is that sound levels that differ by less than 1 dB are hard to distinguish, as the next example shows.


You may notice that the last is quieter than the first, but it is difficult to notice the difference between successive pairs. 10*log10(1.07) = 0.3, so to increase the sound level by 0.3 dB, the power must be increased by 7%, or the voltage by 3.5%.

Standard reference levels ("absolute" sound level)

When the decibel is used to give the sound level for a single sound rather than a ratio, then a reference level must be chosen. For sound pressure, the reference level (for air) is usually chosen as 20 micropascals, or 0.02 mPa. (This is very low: it is 2 ten billionths of an atmosphere. Nevertheless, this is about the limit of sensitivity of the human ear, in its most sensitive range of frequency. Usually this sensitivity is only found in rather young people or in people who have not been exposed to loud music or other loud noises. Personal music systems with in‐ear speakers ('walkmans') are capable of very high sound levels in the ear, and are believed by some to be responsible for much of the hearing loss in young adults in developed countries.)

So if you read of a sound pressure level of 86 dB, it means that

20 log (p2/p1) = 86 dB

where p1 is the sound pressure of the reference level, and p2 that of the sound in question. Divide both sides by 20:  

log (p2/p1) = 4.3

p2/p1 = 10^4.3

4 is the log of 10 thousand, 0.3 is the log of 2, so this sound has a sound pressure 20 thousand times greater than that of the reference level (p2/p1 = 20,000). 86 dB is a loud but not dangerous level of sound, if it is not maintained for very long.  

What does 0 dB mean?

This level occurs when the measured intensity is equal to the reference level, i.e., it is the sound level corresponding to 0.02 mPa. In this case we have

sound level = 20 log (pmeasured/preference) = 20 log 1 = 0 dB

Remember that decibels measure a ratio. 0 dB occurs when you take the log of a ratio of 1. So 0 dB does not mean no sound, it means a sound level where the sound pressure is equal to that of the reference level. This is a small pressure, but not zero. It is also possible to have negative sound levels: −20 dB would mean a sound with pressure 10 times smaller than the reference pressure, i.e., 2 micropascals.
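The same arithmetic in a short Python sketch, assuming the 20 µPa reference above (function names are ours):

    import math

    P_REF = 20e-6  # reference pressure for air: 20 micropascals

    def spl_db(pressure_pa):
        """Sound pressure level re 20 uPa."""
        return 20 * math.log10(pressure_pa / P_REF)

    def pressure_from_spl(level_db):
        """Inverse: pressure in pascals for a given sound pressure level."""
        return P_REF * 10 ** (level_db / 20)

    print(pressure_from_spl(86))    # ~0.4 Pa, i.e. about 20 000 times the reference
    print(spl_db(P_REF))            # 0 dB: pressure equal to the reference
    print(pressure_from_spl(-20))   # 2e-06 Pa: negative levels are possible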

Not all sound pressures are equally loud. This is because the human ear does not respond equally to all frequencies: we are much more sensitive to sounds in the frequency range about 1 kHz to 4 kHz (1000 to 4000 vibrations per second) than to very low or high frequency sounds. For this reason, sound meters are usually fitted with a filter whose


response to frequency is a bit like that of the human ear. (More about these filters below.) If the "A weighting filter" is used, the sound pressure level is given in units of dB(A) or dBA. Sound pressure level on the dBA scale is easy to measure and is therefore widely used. It is still different from loudness, however, because the filter does not respond in quite the same way as the ear. To determine the loudness of a sound, one needs to consult some curves representing the frequency response of the human ear, given below. (Alternatively, you can measure your own hearing response.)

Logarithmic response, psychophysical measures, sones and phons

Why do we use decibels? The ear is capable of hearing a very large range of sounds: the ratio of the sound pressure that causes permanent damage from short exposure to the limit that (undamaged) ears can hear is more than a million. To deal with such a range, logarithmic units are useful: the log of a million is 6, so this ratio represents a difference of 120 dB. Psychologists also say that our sense of hearing is roughly logarithmic (see under sones below). In other words, they think that you have to increase the sound intensity by the same factor to have the same increase in loudness. Whether you agree or not is up to you, because this is a rather subjective question. (Listen to the sound files linked above.)

The filters used for dBA and dBC

The most widely used sound level filter is the A scale, which roughly corresponds to the inverse of the 40 dB (at 1 kHz) equal‐loudness curve. Using this filter, the sound level meter is thus less sensitive to very high and very low frequencies. Measurements made on this scale are expressed as dBA. The C scale is practically linear over several octaves and is thus suitable for subjective measurements only for very high sound levels. Measurements made on this scale are expressed as dBC. There is also a (rarely used) B weighting scale, intermediate between A and C. The figure below shows the response of the A filter (left) and C filter, with gains in dB given with respect to 1 kHz. (For an introduction to filters, see RC filters, integrators and differentiators.)


On the music acoustics and speech acoustics sites, we plot the sound spectra in dB. The reason for this common practice is that the range of measured sound pressures is large.

Loudness, phons and sones

The phon is a unit that is related to dB by the psychophysically measured frequency response of the ear. At 1 kHz, readings in phons and dB are, by definition, the same. For all other frequencies, the phon scale is determined by the results of experiments in which volunteers were asked to adjust the loudness of a signal at a given frequency until they judged its loudness to equal that of a 1 kHz signal. To convert from dB to phons, you need a graph of such results. Such a graph depends on sound level: it becomes flatter at high sound levels.


Curves of equal loudness were determined experimentally by Robinson & Dadson in 1956, following the original work of Fletcher & Munson (Fletcher, H. and Munson, W.A. (1933) J. Acoust. Soc. Am. 6:59; Robinson, D.W. and Dadson, R.S. (1956) Br. J. Appl. Phys. 7:166). Plots of equal loudness as a function of frequency are often generically called Fletcher-Munson curves.

The sone is derived from psychophysical measurements which involved volunteers adjusting sounds until they judge them to be twice as loud. This allows one to relate perceived loudness to phons. A sone is defined to be equal to 40 phons. Experimentally it was found that a 10 dB increase in sound level corresponds approximately to a perceived doubling of loudness. So that approximation is used in the definition of the sone: 0.5 sone = 30 phon, 1 sone = 40 phon, 2 sone = 50 phon, 4 sone = 60 phon, etc.

 

Wouldn't it be great to be able to convert from dB (which can be measured by an instrument) to sones (which approximate loudness as perceived by people)? This is usually done using tables that you can find in acoustics handbooks. However, if you don't mind a rather crude approximation, you can say that the A weighting curve approximates the human frequency response at low to moderate sound levels, so dBA is very roughly the same as phons. Then use the logarithmic relation between sones and phons described above.
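A rough sketch of that chain in Python, under the stated assumptions (dBA approximates phons at low to moderate levels; sones double for every 10 phons):

    def sones_from_phons(phons):
        """Loudness in sones: 1 sone = 40 phons, doubling for every +10 phons."""
        return 2 ** ((phons - 40) / 10)

    # Crude approximation described above: treat a dBA reading as the phon value.
    for dba in (30, 40, 50, 60):
        print(dba, "dBA ~", sones_from_phons(dba), "sones")   # 0.5, 1, 2, 4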


Recording level and decibels

Meters measuring recording or output level on audio electronic gear (mixing consoles etc) are almost always recording the AC rms voltage (see links to find out about AC and rms). For a given resistor R, the power P is V^2/R, so

difference in voltage level = 20 log (V2/V1) dB = 10 log (V2^2/V1^2) dB = 10 log (P2/P1) dB, or

absolute voltage level = 20 log (V/Vref)

where Vref is a reference voltage. So what is the reference voltage?  

The obvious level to choose is one volt rms, and in this case the level is written as dBV. This is rational, and also convenient with modern analog-digital cards whose maximum range is often about one volt rms. So one has to remember to keep the level in negative dBV (less than one volt) to avoid clipping the peaks of the signal, but not too negative (so your signal is still much bigger than the background noise).

Sometimes you will see dBm. This used to mean decibels of electrical power, with respect to one milliwatt, and sometimes it still does. However, it's complicated for historical reasons. In the mid twentieth century, many audio lines had a nominal impedance of 600 Ω. If the impedance is purely resistive, and if you set V^2/600 Ω = 1 mW, then you get V = 0.775 volts. So, provided you were using a 600 Ω load, 1 mW of power was 0 dBm was 0.775 V, and so you calibrated your level meters thus. The problem arose because, once a level meter that measures voltage is calibrated like this, it will read 0 dBm at 0.775 V even if it is not connected to 600 Ω. So, perhaps illogically, dBm will sometimes mean dB with respect to 0.775 V. (When I was a boy, calculators were expensive so I used dad's old slide rule, which had the factor 0.775 marked on the cursor window to facilitate such calculations.)
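The two conventions, side by side in a small Python sketch (assuming the references just described: 1 V rms for dBV, and 0.775 V, i.e. 1 mW into 600 Ω, for the voltage-style dBm):

    import math

    def dbv(v_rms):
        """Voltage level re 1 V rms."""
        return 20 * math.log10(v_rms / 1.0)

    def dbm_600(v_rms):
        """'Voltage dBm': level re sqrt(1 mW * 600 ohm) = 0.775 V rms."""
        v_ref = math.sqrt(1e-3 * 600)
        return 20 * math.log10(v_rms / v_ref)

    print(dbv(1.0))        # 0 dBV
    print(dbm_600(0.775))  # ~0 dBm under the 600-ohm convention
    print(dbv(0.775))      # ~-2.2 dBV: same voltage, different reference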

How to convert dBV or dBm into dB of sound level? There is no simple way. It depends on how you convert the electrical power into sound power. Even if your electrical signal is connected directly to a loudspeaker, the conversion will depend on the efficiency and impedance of your loudspeaker. And of course there may be a power amplifier, and various acoustic complications between where you measure the dBV on the mixing desk and where your ears are in the sound field.


Intensity, radiation and dB

How does sound level (or radio signal level, etc) depend on distance from the source?

A source that emits radiation equally in all directions is called isotropic. Consider an isolated source of sound, far from any reflecting surfaces -- perhaps a bird singing high in the air. Imagine a sphere with radius r, centred on the source. The source outputs a total power P, continuously. This sound power spreads out and is passing through the surface of the sphere. If the source is isotropic, the intensity I is the same everywhere on this surface, by definition. The intensity I is defined as the power per unit area. The surface area of the sphere is 4πr^2, so the power (in our example, the sound power) passing through each square metre of surface is, by definition:

I = P/(4πr^2).

So we see that, for an isotropic source, intensity is inversely proportional to the square of the distance away from the source:  

I2/I1 = r1^2/r2^2.

But intensity is proportional to the square of the sound pressure, so we could equally write:  

p2/p1 = r1/r2.  

So, if we double the distance, we reduce the sound pressure by a factor of 2 and the intensity by a factor of 4: in other words, we reduce the sound level by 6 dB. If we increase r by a factor of 10, we decrease the level by 20 dB, etc.  
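A quick Python check of these distance rules (an isotropic source is assumed):

    import math

    def level_drop_db(r1, r2):
        """Drop in sound level moving from distance r1 to r2 (pressure ~ 1/r)."""
        return 20 * math.log10(r2 / r1)

    print(level_drop_db(1, 2))    # ~6 dB for doubling the distance
    print(level_drop_db(1, 10))   # 20 dB for ten times the distance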

Be warned, however, that many sources are not isotropic, especially if the wavelength is smaller than, or of a size comparable with the source. Further, reflections are often quite important, especially if the ground is nearby, or if you are indoors.

dBi and radiation that varies with direction

Radiation that varies in direction is called anisotropic. For many cases in communication, isotropic radiation is wasteful: why emit a substantial fraction of power upwards if the receiver is, like you, relatively close to ground level? For sound of short wavelength (including most of the important range for speech), a megaphone can help make your voice more anisotropic. For radio, a wide range of designs allows antennae to be highly anisotropic for both transmission and reception.

So, when you are interested in emission in (or reception from) a particular direction, you want the ratio of intensity measured in that direction, at a given distance, to be higher


than that measured at the same distance from an isotropic radiator (or received by an isotropic receiver). This ratio is called the gain; express the ratio in dB and you have the gain in dBi for that radiator. This unit is mainly used for antennae, either transmitting or receiving, but it is sometimes used for sound sources (and directional microphones).

Example problems

A few people have written asking for examples of using dB in calculations. So...

• All else equal, how much louder is a loudspeaker driven (in its linear range) by a 100 W amplifier than by a 10 W amplifier?

The powers differ by a factor of ten, which, as we saw above, is 10 dB. All else equal here means that the frequency responses are equal and that the same input signal is used, etc. So the frequency dependence should be the same. 10 dB corresponds to 10 phons. To get a perceived doubling of loudness, you need an increase of 10 phons. So the speaker driven by the 100 W amplifier is twice as loud as when driven by the 10 W, assuming you stay in the linear range and don't distort or destroy the speaker. (The 100 W amplifier produces twice as many sones as does the 10 W.)

• If, in ideal quiet conditions, a young person can hear a 1 kHz tone at 0 dB emitted by a loudspeaker (perhaps a softspeaker?), by how much must the power of the loudspeaker be increased to raise the sound to 110 dB (a dangerously loud but survivable level)?  

The difference in decibels between the two signals of power P2 and P1 is defined above to be

ΔL = 10 log (P2/P1) dB, so, raising 10 to the power of these two equal quantities: 10^(ΔL/10) = P2/P1, so: P2/P1 = 10^(110/10) = 10^11 = one hundred thousand million,

which is a demonstration that the human ear has a remarkably large dynamic range, perhaps 100 times greater than that of the eye.  

• An amplifier has an input of 10 mV and an output of 2 V. What is its voltage gain in dB?

Voltage, like pressure, appears squared in expressions for power or intensity. (The power dissipated in a resistor R is V^2/R.) So, by convention, we define:

gain = 20 log (Vout/Vin) = 20 log (2 V / 10 mV) = 46 dB

(In the acoustic cases given above, we saw that the pressure ratio, expressed in dB, was the same as the power ratio: that was the reason for the factor 20 when defining dB for pressure. It is worth noting that, in the voltage gain example, the power gain of the amplifier is unlikely to equal the voltage gain. The power is proportional to the square of the voltage in a given resistor. However, the input and output impedances of


amplifiers are often quite different. For instance, a buffer amplifier or emitter follower has a voltage gain of about 1, but a large current gain.)
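The three worked examples, verified in a few lines of Python:

    import math

    # 100 W vs 10 W amplifier: level difference in dB
    print(10 * math.log10(100 / 10))        # 10.0 dB (~ a perceived doubling of loudness)

    # Raising a 1 kHz tone from 0 dB to 110 dB: required power ratio
    print(10 ** (110 / 10))                 # 1e+11, one hundred thousand million

    # 10 mV in, 2 V out: voltage gain in dB
    print(20 * math.log10(2.0 / 10e-3))     # ~46.0 dB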

What is a logarithm? A brief introduction.

First let's look at exponents. If we write 10^2 or 10^3, we mean

10^2 = 10*10 = 100   and   10^3 = 10*10*10 = 1000.

So the exponent (2 or 3 in our example) tells us how many times to multiply the base (10 in our example) by itself. For this page, we only need logarithms to base 10, so that's all we'll discuss. In these examples, 2 is the log of 100, and 3 is the log of 1000. If we multiply ten by itself only once, we get 10, so 1 is the log of 10, or in other words  

10^1 = 10.

We can also have negative logarithms. When we write 10^(−2) we mean 0.01, which is 1/100, so

10^(−n) = 1/10^n.

Let's go one step more complicated. Let's work out the value of (10^2)^3. This is easy enough to do, one step at a time:

(10^2)^3 = (100)^3 = 100*100*100 = 1,000,000 = 10^6.

By writing it out, you should convince yourself that, for any whole numbers n and m,  

(10^n)^m = 10^(nm).

But what if n is not a whole number? Since the rules we have used so far don't tell us what this would mean, we can define it to mean what we like, but we should choose our definition so that it is consistent. The definition of the logarithm of a number a (to base 10) is this:  

10^(log a) = a.

In other words, the log of the number a is the power to which you must raise 10 to get the number a. For an example of a number whose log is not a whole number, let's consider the square root of 10, which is 3.1623..., in other words 3.1623^2 = 10. Using our definition above, we can write this as

3.1623^2 = (10^(log 3.1623))^2 = 10 = 10^1.

However, using our rule that (10^n)^m = 10^(nm), we see that in this case 2 log 3.1623 = 1, so the log of 3.1623... is 1/2. The square root of 10 is 10^0.5. Now there are a couple of questions: how do we calculate logs, and can we be sure that all real numbers greater than zero have real logs? We leave these to mathematicians (who, by the way, would be happy to give you a more rigorous treatment of exponents than this superficial account).

A few other important examples are worth noting. 10^0 would have the property that, no matter how many times you multiplied it by itself, it would never get as large as 10. Further, no matter how many times you divided it into 1, you would never get as small as 1/10. Using our (10^n)^m = 10^(nm) rule, you will see that 10^0 = 1 satisfies this, so the log of one is zero. The log of 2 is used often in acoustics, and it is 0.3010. Hence, a factor of 2 in power corresponds to 3.01 dB, which we should normally write as 3 dB because, as you can discover for yourself in hearing response, decimal points of decibels are usually too small to notice.
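These values are easy to confirm with Python's math module:

    import math

    print(math.log10(100))      # 2.0
    print(math.log10(2))        # 0.30102...
    print(10 * math.log10(2))   # 3.0102...: why doubling the power is "3 dB"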


1.3 (Multi)Media Data and Multimedia Metadata

This section describes (multi)media data and multimedia metadata. We overview the MPEG coding family (MPEG-1, MPEG-2, MPEG-4), then introduce the MPEG-7 multimedia metadata standard and, finally, the concepts of (multi)media data and multimedia metadata introduced in MPEG-21. Multimedia metadata models are of obvious interest for the design of content-based multimedia systems, and we refer to them throughout the book.


1.3.1 (Multi)Media Data 

Given the broad use of images, audio and video data nowadays, it should not come as a surprise that much effort has been put into developing standards for codecs, that is, for coding and decoding multimedia data. Realizing that much multimedia data is redundant, multimedia codecs use compression algorithms to identify and exploit this redundancy.

MPEG video compression is used in many current and emerging products. It is at the heart of digital television set-top boxes, digital subscriber service, high-definition television decoders, digital video disc players, Internet video, and other applications. These applications benefit from video compression in that they now require less storage space for archived video information, less bandwidth for the transmission of the video information from one point to another, or a combination of both.

The basic idea behind MPEG video compression is to remove both spatial redundancy within a video frame and temporal redundancy between video frames. As in Joint Photographic Experts Group (JPEG), the standard for still-image compression, DCT (Discrete Cosine Transform)-based compression is used to reduce spatial redundancy. Motion compensation or estimation is used to exploit temporal redundancy. This is possible because the images in a video stream usually do not change much within small time intervals. The idea of motion compensation is to encode a video frame based on other video frames temporally close to it.

In addition to the fact that MPEG video compression works well in a wide variety of applications, a large part of its popularity is that it is defined in three finalized international standards: MPEG-1, MPEG-2, and MPEG-4.

MPEG-1 is the first standard (issued in 1993) by MPEG and is intended for medium-quality and medium-bit rate video and audio compression. It allows videos to be compressed by ratios in the range of 50:1 to 100:1, depending on image sequence type and desired quality. The encoded data rate is targeted at 1.5 megabits per second, for this is a reasonable transfer rate for a double-speed CD-ROM player. This rate includes audio and video. MPEG-1 video compression is based on a macroblock structure, motion compensation, and the conditional replenishment of macroblocks. MPEG-1 encodes the first frame in a video sequence as an intraframe (I-frame). Each subsequent frame in a group of pictures (e.g., 15 frames) is coded using interframe prediction (P-frame) or bidirectional prediction (B-frame). Only data from the nearest previously coded I-frame or P-frame is used for prediction of a P-frame. For a B-frame, either the previous or the next I- or P-frame or both are used. Exhibit 1.5 illustrates the principles of MPEG-1 video compression.

Exhibit 1.5: Principles of the MPEG‐1 video compression technique.  
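As a toy illustration of these prediction rules (our own sketch, not part of the standard), the following Python snippet picks reference frames within a common I B B P B B P B B pattern; in a real encoder the trailing B-frames would also use the I-frame of the next group of pictures:

    # I- and P-frames can serve as references ("anchors"); B-frames cannot.
    GOP = "IBBPBBPBB"
    anchors = [i for i, t in enumerate(GOP) if t in "IP"]

    for i, t in enumerate(GOP):
        if t == "I":
            refs = []                                         # intra-coded, no prediction
        elif t == "P":
            refs = [max(a for a in anchors if a < i)]         # nearest previous I or P
        else:  # "B"
            prev = max((a for a in anchors if a < i), default=None)
            nxt = min((a for a in anchors if a > i), default=None)
            refs = [a for a in (prev, nxt) if a is not None]  # previous and/or next anchor
        print(i, t, "references:", refs)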


On the encoder side, the DCT is applied to an 8 × 8 luminance and chrominance block, and thus, the chrominance and luminance values are transformed into the frequency domain. The dominant values (DC values) are in the upper-left corner of the resulting 8 × 8 block and have a special importance. They are encoded relative to the DC coefficient of the previous block (DPCM coding).

Then each of the 64 DCT coefficients is uniformly quantized. The nonzero quantized values of the remaining DCT coefficients and their locations are then zig-zag scanned and run-length entropy coded using variable-length code tables. The scanning of the quantized DCT two-dimensional image signal followed by variable-length code word assignment for the coefficients serves as a mapping of the two-dimensional image signal into a one-dimensional bit stream. The purpose of zig-zag scanning is to trace the low-frequency DCT coefficients (containing the most energy) before tracing the high-frequency coefficients (which are perceptually less noticeable).
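A minimal numeric sketch of this intraframe pipeline in Python/NumPy: 8x8 DCT, uniform quantization, zig-zag scan, and simple (zero-run, value) pairs. The flat quantization step of 16 is a placeholder for illustration, not a table from the standard:

    import numpy as np

    N = 8
    # Orthonormal DCT-II basis matrix: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2N))
    C = np.array([[np.sqrt((1 if u == 0 else 2) / N)
                   * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                   for x in range(N)] for u in range(N)])

    def dct2(block):
        return C @ block @ C.T                  # 2-D DCT as two 1-D transforms

    def zigzag_indices(n=8):
        # Diagonals in order of increasing frequency, alternating direction.
        return sorted(((x, y) for x in range(n) for y in range(n)),
                      key=lambda p: (p[0] + p[1],
                                     p[0] if (p[0] + p[1]) % 2 else p[1]))

    block = np.outer(np.arange(N), np.ones(N)) * 16.0 - 64   # a smooth test block
    q = np.round(dct2(block) / 16).astype(int)               # uniform quantization
    scan = [q[x, y] for x, y in zigzag_indices()]            # low frequencies first

    pairs, run = [], 0
    for v in scan[1:]:                          # the DC coefficient is coded separately
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))              # (number of preceding zeros, value)
            run = 0
    print("DC:", scan[0], "AC (run, value) pairs:", pairs)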

MPEG-1 audio compression is based on perceptual coding schemes. It specifies three audio coding schemes, simply called Layer-1, Layer-2, and Layer-3. Encoder complexity and performance (sound quality per bit rate) progressively increase from Layer-1 to Layer-3. Each audio layer extends the features of the layer with the lower number. The simplest form is Layer-1. It has been designed mainly for the DCC (digital compact cassette), where it is used at 384 kilobits per second (kbps) (called "PASC"). Layer-2 achieves a good sound quality at bit rates down to 192 kbps, and Layer-3 has been designed for lower bit rates down to 32 kbps. A Layer-3 decoder can also accept audio streams encoded with Layer-2 or Layer-1, whereas a Layer-2 decoder may accept only Layer-1.


MPEG video compression and audio compression are different. The audio stream flows into two independent blocks of the encoder. The mapping block of the encoder filters and creates 32 equal-width frequency subbands, whereas the psychoacoustics block determines a masking threshold for the audio inputs. By determining such a threshold, the psychoacoustics block can output information about noise that is imperceptible to the human ear and thereby reduce the size of the stream. Then, the audio stream is quantized to meet the actual bit rate specified by the layer used. Finally, the frame packing block assembles the actual bitstream from the output data of the other blocks, and adds header information as necessary before sending it out.

MPEG-2 (issued in 1994) was designed for broadcast television and other applications using interlaced images. It provides higher picture quality than MPEG-1, but uses a higher data rate. At lower bit rates, MPEG-2 provides no advantage over MPEG-1. At higher bit rates, above about 4 megabits per second, MPEG-2 should be used in preference to MPEG-1. Unlike MPEG-1, MPEG-2 supports interlaced television systems and vertical blanking interval signals. It is used in digital video disc (DVD) video.

The concept of I-, P-, and B-pictures is retained in MPEG-2 to achieve efficient motion prediction and to assist random access. In addition to MPEG-1, new motion-compensated field prediction modes were used to efficiently encode field pictures. The top fields and bottom fields are coded separately. Each bottom field is coded using motion-compensated interfield prediction based on the previously coded top field. The top fields are coded using motion compensation based on either the previous coded top field or the previous coded bottom field.

MPEG-2 introduces profiles for scalable coding. The intention of scalable coding is to provide interoperability between different services and to flexibly support receivers with different display capabilities. One of the important purposes of scalable coding is to provide a layered video bit stream that is amenable to prioritized transmission. Exhibit 1.6 depicts the general philosophy of a multiscale video coding scheme.

Exhibit 1.6: Scalable coding in MPEG-2.


MPEG-2 syntax supports up to three different scalable layers. Signal-to-noise ratio scalability is a tool that has been primarily developed to provide graceful quality degradation of the video in prioritized transmission media. Spatial scalability has been developed to support display screens with different spatial resolutions at the receiver. Lower-spatial resolution video can be reconstructed from the base layer. The temporal scalability tool generates different video layers. The lower one (base layer) provides the basic temporal rate, and the enhancement layers are coded with temporal prediction of the lower layer. These layers, when decoded and temporally multiplexed, yield full temporal resolution of the video. Stereoscopic video coding can be supported with the temporal scalability tool. A possible configuration is as follows. First, the reference video sequence is encoded in the base layer. Then, the other sequence is encoded in the enhancement layer by exploiting binocular and temporal dependencies; that is, disparity and motion estimation or compensation.

MPEG-1 and MPEG-2 use the same family of audio codecs, Layer-1, Layer-2, and Layer-3. The new audio features of MPEG-2 use lower sampling rates in Layer-3 to address low-bit rate applications with limited bandwidth requirements (the bit rates extend down to 8 kbps). Furthermore, a multi-channel extension for sound applications with up to five main audio channels (left, center, right, left surround, right surround) is proposed.

A non-ISO extension, called MPEG 2.5, was developed by the Fraunhofer Institute to improve the performance of MPEG-2 Audio Layer-3 at lower bit rates. This extension allows sampling rates of 8, 11.025, and 12 kHz, half of those used in MPEG-2. Lowering the sampling rate reduces the frequency response but allows the frequency resolution to be increased, so that the result has a significantly better quality.

The popular MP3 file format is an abbreviation for MPEG-1/2 Layer-3 and MPEG-2.5. It actually uses the MPEG 2.5 codec for small bit rates (<24 kbps). For bit rates higher than 24 kbps, it uses the MPEG-2 Layer-3 codec.

A comprehensive comparison of MPEG-2 Layer-3, MPEG 2.5, MP3, and AAC (see MPEG-4 below) is given in the MP3 overview by Brandenburg.

MPEG-4 (issued in 1999) is the newest video coding standard by MPEG and goes further, from a pure pixel-based approach, that is, from coding the raw signal, to an object-based approach. It uses segmentation and a more advanced scheme of description. Object coding is for the first time implemented in JPEG-2000 (issued in 2000) and MPEG-4. Indeed, MPEG-4 is primarily a toolbox of advanced compression algorithms for audiovisual information, and in addition, it is suitable for a variety of display devices and networks, including low-bit rate mobile networks. MPEG-4 organizes its tools into the following eight parts:

1. ISO-IEC 14496-1 (systems)
2. ISO-IEC 14496-2 (visual)
3. ISO-IEC 14496-3 (audio)
4. ISO-IEC 14496-4 (conformance)
5. ISO-IEC 14496-5 (reference software)
6. ISO-IEC 14496-6 (delivery multimedia integration framework)
7. ISO-IEC 14496-7 (optimized software for MPEG-4 tools)
8. ISO-IEC 14496-8 (carriage of MPEG-4 contents over Internet protocol networks)

These tools are organized in a hierarchical manner and operate on different interfaces. Exhibit 1.7 illustrates this organization model, which comprises the compression layer, the sync layer, and the delivery layer. The compression layer is media aware and delivery unaware, the sync layer is media unaware and delivery unaware, and the delivery layer is media unaware and delivery aware.

Exhibit 1.7: General organization of MPEG-4.  

 

The compression layer does media encoding and decoding of elementary streams, the sync layer manages elementary streams and their synchronization and hierarchical relations, and the delivery layer ensures transparent access to MPEG-4 content irrespective of the delivery technology used. The following paragraphs briefly describe the main features supported by the MPEG-4 tools. Note also that not all the features of the MPEG-4 toolbox will be implemented in a single application. This is determined by levels and profiles, to be discussed below.

MPEG-4 is object oriented: An MPEG-4 video is a composition of a number of stream objects that build together a complex scene. The temporal and spatial dependencies between the objects have to be described with a description following the BIFS (binary format for scene description).

MPEG-4 Systems provides the functionality to merge the objects (natural or synthetic) and render the product in a single scene. Elementary streams can be adapted (e.g., one object is dropped) or mixed with stored and streaming media. MPEG-4 allows for the creation of a scene consisting of objects originating from different locations. Exhibit 1.8 shows an application example of the object-scalability provided in MPEG-4. The objects that are representing videos are called video objects, or VOs. They may be combined with other VOs (or audio objects and three-dimensional objects) to form a complex scene. One possible scalability option is object dropping, which may be used for adaptation purposes at the server or in the Internet. Another


option is the changing of objects within a scene; for instance, transposing the positions of the two VOs in Exhibit 1.8. In addition, MPEG-4 defines the MP4 file format. This format is extremely flexible and extensible. It allows the management, exchange, authoring, and presentation of MPEG-4 media applications.

Exhibit 1.8: Object-based scalability in MPEG-4.

 

MPEG-4 Systems also provides basic support to protect and manage content. This is actually moved into the MPEG-21 intellectual property management and protection (IPMP) (MPEG-21, Part 4).

MPEG-4 Visual provides a coding algorithm that is able to produce usable media at a rate of 5 kbps and QCIF (quarter common intermediate format) resolution (176 × 144 pixels). This makes motion video possible on mobile devices. MPEG-4 Visual describes more than 30 profiles that define different combinations of scene resolution, bit rate, audio quality, and so forth to accommodate the different needs of different applications. To mention just two example profiles, there is the simple profile, which provides the most basic audio and video at a bit rate that scales down to 5 kbps. This profile is extremely stable and is designed to operate in exactly the same manner on all MPEG-4 decoders. The studio profile, in contrast, is used for applications involved with digital theater and is capable of bit rates in the 1 Gbps (billions of bits per second) range. Of note is that the popular DivX codec (http://www.divx-digest.com/) is based on the MPEG-4 visual coding using the simple scalable profile.

Fine granular scalability mode allows the delivery of the same MPEG-4 content at different bit rates. MPEG-4 Visual has the ability to take content intended to be experienced with a high-bandwidth connection, extract a subset of the original stream, and deliver usable content to a personal digital assistant or other mobile devices. Error correction, built into the standard, allows users to switch to lower-bit rate streams if connection degradation occurs because of lossy wireless networks or congestion. Objects in a scene may also be assigned a priority level that defines which objects will or will not be viewed during network congestion or other delivery problems.


MPEG-4 Audio offers tools for natural and synthetic audio coding. The compression is so efficient that good speech quality is achieved at 2 kbps. Synthetic music and sound are created from a rich toolset called Structured Audio. Each MPEG-4 decoder has the ability to create and process this synthetic audio, making the sound quality uniform across all MPEG-4 decoders.

In this context, MPEG introduced the advanced audio coding (AAC), which is a new audio coding family based on a psychoacoustics model. Sometimes referred to as MP4, which is misleading because it coincides with the MPEG-4 file format, AAC provides significantly better quality at lower bit rates than MP3. AAC was developed under MPEG-2 and was improved for MPEG-4. Additional tools in MPEG-4 increase the effectiveness of MPEG-2 AAC at lower bit rates and add scalability and error resilience characteristics.

AAC supports a wider range of sampling rates (from 8 to 96 kHz) and up to 48 audio channels and is, thus, more powerful than MP3. Three profiles of AAC provide varying levels of complexity and scalability. MPEG-4 AAC is, therefore, designed as high-quality general audio codec for 3G (third-generation) wireless terminals. However, compared with MP3, AAC software is much more expensive to license, because the companies that hold related patents decided to keep a tighter rein on it.

The last component proposed is the Delivery Multimedia Integration Framework (DMIF). This framework provides abstraction from the transport protocol (network, broadcast, etc.) and has the ability to identify delivery systems with different QoS. DMIF also abstracts the application from the delivery type (mobile device versus wired) and handles the control interface and signaling mechanisms of the delivery system. The framework specification makes it possible to write MPEG-4 applications without in-depth knowledge of delivery systems or protocols.

1.3.1.1 Related Video Standards and Joint Efforts. 

There exist a handful of other video standards and codecs. Cinepak (http://www.cinepak.com/text.html) was developed by CTI (Compression Technologies, Inc.) to deliver high-quality compressed digital movies for the Internet and game environments.

RealNetworks (http://www.real.com) has developed codecs for audiovisual streaming applications. The codec reduces the spatial resolution and analyzes whether a frame contributes to motion and shapes, dropping it if necessary. Priority is given to the encoding of the audio; that is, audio tracks are encoded first and then the video track is added. If network congestion occurs, the audio takes priority and the picture just drops a few frames to keep up. The SureStream feature allows the developer to create up to eight versions of the audio and video tracks. If there is network congestion during streaming, RealPlayer and RealServer switch between versions to maintain image quality.

For broadcast applications, especially video conferencing, the ITU-T (http://www.itu.int/ITU-T/) standardized the H.261 (first version in 1990) and H.263 (first version in 1995) codecs. H.261 was designed to work at bit rates that are multiples of 64 kbps (adapted to Integrated Services Digital Network connections). The H.261 coding algorithm is a hybrid of interframe prediction, transform coding, and motion compensation. Interframe prediction removes temporal


redundancy. Transform coding removes the spatial redundancy. Motion vectors are used to help the codec to compensate for motion. To remove any further redundancy in the transmitted stream, variable-length coding is used. The coding algorithm of H.263 is similar to H.261; however, it was improved with respect to performance and error recovery. H.263 uses half-pixel precision for motion compensation, whereas H.261 used full-pixel precision and a loop filter. Some parts of the hierarchical structure of the stream are optional in H.263, so the codec can be configured for a lower bit rate or better error recovery. Several negotiable options are included to improve performance: unrestricted motion vectors, syntax-based arithmetic coding, advance prediction, and forward and backward frame prediction, which is a generalization of the concepts introduced in MPEG's P-B-frames.

MPEG and ITU-T launched the Joint Video Team in December 2001 to establish a new video coding standard. The new standard, named ISO-IEC MPEG-4 Advanced Video Coding (AVC, Part 10 of MPEG-4)/ITU-T H.264, offers significant bit-rate and quality advantages over the previous ITU/MPEG standards. To improve coding efficiency, the macroblock (see Exhibit 1.3) is broken down into smaller blocks that attempt to contain and isolate the motion. Quantization as well as entropy coding was improved. The standard is available as of autumn 2003. More technical information on MPEG-4 AVC may be found at http://www.islogic.com/products/islands/h264.html.

In addition, many different video file formats exist that must be used with a given codec. For instance, Microsoft introduced a standard for incorporating digital video under Windows by the file standard called AVI (Audio Video Interleaved). The AVI format merely defines how the video and audio will be stored, not how they have to be encoded.

1.3.2 Multimedia Metadata 

Metadata describing a multimedia resource, such as an audio or video stream, can be seen from various perspectives, based on who produces or provides the metadata:

• From the content-producer's perspective, typical metadata are bibliographical information of the resource, such as author, title, creation date, resource format, and so forth.

• From the perspective of the service providers, metadata are typically value-added descriptions (mostly in XML format) that qualify information needed for retrieval. These data include the various formats under which a resource is available and semantic information, such as the players in a soccer game. This information is necessary to enable searching with an acceptable precision in multimedia applications.

• From the perspective of the media consumer, additional metadata describing the consumer's preferences and resource availability are useful. These metadata personalize content consumption and have to be considered by the producer. Additional metadata are necessary for the delivery over the Internet or mobile networks to guarantee access to the best possible content; for example, metadata describing adaptation of the video is required when the available bandwidth decreases. One issue that needs to be addressed is whether or not an adaptation process is acceptable to the user.


To describe metadata, various research projects have developed sets of elements to facilitate the retrieval of multimedia resources. Initiatives that appear likely to develop into widely used and general standards for Internet multimedia resources are the Dublin Core Standard, the Metadata Dictionary SMPTE (Society of Motion Picture and Television Engineers), MPEG-7, and MPEG-21. These four standards are general; that is, they do not target a particular industry or application domain and are supported by well-recognized organizations.

Dublin Core is a Resource Description Framework-based standard that represents a metadata element set intended to facilitate the discovery of electronic resources. There have been many papers that have discussed the applicability of Dublin Core to nontextual documents such as images, audio, and video. They have primarily focused on extensions to the core elements through the use of subelements and schemes specific to audiovisual data. The core elements are title, creator, subject, description, publisher, contributor, date, type, format, identifier, source, language, relation, coverage, and rights. Dublin Core is currently used as a metadata standard in many television archives.
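As a small illustration, the following Python sketch emits a Dublin Core description of a hypothetical video resource as XML; the element names come from the set above, while the resource values are invented:

    import xml.etree.ElementTree as ET

    DC_NS = "http://purl.org/dc/elements/1.1/"   # Dublin Core element namespace
    ET.register_namespace("dc", DC_NS)

    record = ET.Element("record")
    for element, value in [
        ("title", "Example lecture recording"),   # hypothetical resource
        ("creator", "Example Author"),
        ("date", "2003-10-01"),
        ("type", "MovingImage"),
        ("format", "video/mpeg"),
        ("language", "en"),
    ]:
        ET.SubElement(record, f"{{{DC_NS}}}{element}").text = value

    print(ET.tostring(record, encoding="unicode"))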

In this context, one has also to mention the Metadata Dictionary SMPTE. The dictionary is a big collection of registered names and data types, developed mostly for the television and video industries that form the SMPTE membership. Its hierarchical structure allows expansion and mechanisms for data formatting in television and video signals and provides a common method of implementation. Most metadata are media-specific attributes, such as timing information. Semantic annotation is, however, not possible. The SMPTE Web site contains the standards documents (http://www.smpte.org/).

MPEG-7 is an Extensible Markup Language (XML)-based multimedia metadata standard that proposes description elements for the multimedia processing cycle from the capture (e.g., logging descriptors), to analysis and filtering (e.g., descriptors of the MDS [Multimedia Description Schemes]), to delivery (e.g., media variation descriptors), and to interaction (e.g., user preference descriptors). MPEG-7 may, therefore, describe the metadata flow in multimedia applications more adequately than the Dublin Core Standard. There have been several attempts to extend the Dublin Core Standard to describe the multimedia processing cycle. Hunter et al. showed that it is possible to describe both the structure and fine-grained details of video content by using the Dublin Core elements plus qualifiers. The disadvantage of this approach is that the semantic refinement of the Dublin Core through the use of qualifiers may lead to a loss of semantic interoperability. Another advantage of MPEG-7 is that it offers a systems part that allows coding of descriptions (including compression) for streaming and for associating parts of MPEG-7 descriptions with the media units they describe. MPEG-7 is of major importance for content-based multimedia systems. Detailed information on the practical usage of this standard is given in Chapter 2.

Despite the very complete and detailed proposition of multimedia metadata descriptions in MPEG-7, the aspect of the organization of the infrastructure of a distributed multimedia system cannot be described with metadata alone. Therefore, the new MPEG-21 standard was initiated in 2000 to provide mechanisms for distributed multimedia systems design and associated services. A new distribution entity is proposed and validated: the Digital Item. It is used for interaction with all actors (called users in MPEG-21) in a distributed multimedia system. In particular,


content management, intellectual property management and protection (IPMP), and content adaptation shall be regulated to handle different service classes. MPEG-21 shall result in an open framework for multimedia delivery and consumption, with a vision of providing content creators and service providers with equal opportunities in an open electronic market. MPEG-21 is detailed in Chapter 3.

Finally, let us note that, in addition to MPEG's MPEG-7 and MPEG-21, several other consortia have created metadata schemes that describe the context, presentation, and encoding format of multimedia resources. They mainly address a partial aspect of the use of metadata in a distributed multimedia system. Broadly used standards are:

• W3C (World Wide Web Consortium) has built the Resource Description Framework-based Composite Capabilities/Preference Profiles (CC/PP) protocol. The WAP Forum has used the CC/PP to define a User Agent Profile (UAProf) which describes the characteristics of WAP-enabled devices.

• W3C introduced the Synchronized Multimedia Integration Language (SMIL, pronounced "smile"), which enables simple authoring of interactive audiovisual presentations.

• IETF (Internet Engineering Task Force) has created the Protocol-Independent Content Negotiation Protocol (CONNEG), which was released in 1999.

These standards relate to different parts of MPEG; for instance, CC/PP relates to the MPEG-21 Digital Item Adaptation Usage Environment, CONNEG from IETF relates to event reporting in MPEG-21, parts of SMIL relate to MPEG-21 Digital Item Adaptation, and other parts relate to MPEG-4. Thus, MPEG integrated the main concepts of standards in use and, in addition, lets these concepts work cooperatively in a multimedia framework.

Typical bandwidth usage:


Sound Cards


DATA COMPRESSION

Characteristics of audio and video signals relevant to data-compression considerations:


VOIP:


CASE STUDY: INTEGRATED COMMUNICATION APPLICATIONS:


Case study: VOD (video on demand) devices


Sources:

1. Steinmetz, R., Multimedia Technology, 2nd ed., Springer-Verlag, Heidelberg, 2000.


2. Chiariglione, L., Short MPEG-1 description (final). ISO/IEC JTC1/SC29/WG11 N MPEG96, June 1996, http://www.chiariglione.org/mpeg/.

3. Chiariglione, L., Short MPEG-2 description (final). ISO/IEC JTC1/SC29/WG11 N MPEG00, October 2000, http://www.chiariglione.org/mpeg/.

4. Brandenburg, K., MP3 and AAC explained, in Proceedings of the 17th AES International Conference on High Quality Audio Coding, Florence, Italy, 1999.

5. Koenen, R., MPEG-4 overview. ISO/IEC JTC1/SC29/WG11 N4668 (Jeju Meeting), March 2002, http://www.chiariglione.org/mpeg/.

6. Pereira, F., Tutorial issue on the MPEG-4 standard, Image Comm., 15, 2000.

7. Ebrahimi, T. and Pereira, F., The MPEG-4 Book, Prentice-Hall, Englewood Cliffs, NJ, 2002.

8. Bouras, C., Kapoulas, V., Miras, D., Ouzounis, V., Spirakis, P., and Tatakis, A., On-demand hypermedia/multimedia service using pre-orchestrated scenarios over the Internet, Networking Inf. Syst. J., 2, 741–762, 1999.

9. Gecsei, J., Adaptation in distributed multimedia systems, IEEE MultiMedia, 4, 58–66, 1997.

10. Hunter, J. and Armstrong, L., A comparison of schemas for video metadata representation, Comput. Networks, 31, 1431–1451, 1999.

11. Dublin Core Metadata Initiative, Dublin Core metadata element set, version 1.1: Reference description, http://www.dublincore.org/documents/dces/, 1997.

12. Martínez, J.M., Overview of the MPEG-7 standard. ISO/IEC JTC1/SC29/WG11 N4980 (Klagenfurt Meeting), July 2002, http://www.chiariglione.org/mpeg/.

13. Hunter, J., A proposal for the integration of Dublin Core and MPEG-7, ISO/IEC JTC1/SC29/WG11 M6500, 54th MPEG Meeting, La Baule, October 2000, http://archive.dstc.edu.au/RDU/staff/jane-hunter/m6500.zip.

14. Hill, K. and Bormans, J., Overview of the MPEG-21 standard. ISO/IEC JTC1/SC29/WG11 N4041 (Shanghai Meeting), October 2002, http://www.chiariglione.org/mpeg/.

15. http://www.w3.org/Mobile/CCPP/.

16. http://www.w3.org/AudioVideo/.

17. http://www.imc.org/ietf-medfree/index.html.
