Tuesday, March 30, 2010

MPEG VIDEO OVERVIEW

Although the MPEG-1 standard is quite flexible, the basic algorithms have been tuned to work well at data rates of 1 to 1.5 Mbps, at resolutions of about 350 by 250 pixels, and at picture rates of up to 25 or 30 pictures per second. MPEG-1 codes progressively scanned images and does not recognize the concept of interlace; interlaced source video must be converted to a non-interlaced format prior to encoding. The format of the coded video allows forward play and pause. Typical coding and decoding methods also allow random access, fast forward and reverse play, but the requirements for these functions are very much application dependent, and different encoding techniques will include varying levels of flexibility to support them. Compression of the digitized video comes from the use of several techniques: subsampling of the chroma information to match the human visual system, differential coding to exploit spatial redundancy, motion compensation to exploit temporal redundancy, the Discrete Cosine Transform (DCT) to match typical image statistics, quantization, variable-length coding, entropy coding, and the use of interpolated pictures.
Algorithm Structure and Terminology
The MPEG hierarchy is arranged into layers (Figure 1).
[Figure 1: The MPEG layered hierarchy]
This layered structure is designed for flexibility and management efficiency; each layer is intended to support a specific function. For example, the sequence layer specifies sequence parameters such as picture size, aspect ratio, picture rate and bit rate, whereas the picture layer defines parameters such as the temporal reference and picture type. This layered structure improves robustness and reduces susceptibility to data corruption.
For convenience of coding, macroblocks are divided into six blocks of component pixels: four luma and two chroma (Cr and Cb) (Figure 2).
[Figure 2: Macroblock structure: four luma blocks and two chroma blocks]
Blocks are the basic coding unit, and the DCT is applied at this block level. Each block contains 64 component pixels arranged in an 8x8 array (Figure 3).
[Figure 3: An 8x8 block of component pixels]
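As a rough sketch of the transform stage, the C fragment below computes a forward 8x8 DCT on one block of samples using the direct (unoptimised) formula; real encoders use fast factorisations, and the function name here is purely illustrative.

#include <math.h>

#define BLOCK 8
#define PI 3.14159265358979323846

/* Direct forward 8x8 DCT-II: out[u][v] is computed from the block of
 * component pixels in[x][y]. */
void dct_8x8(const double in[BLOCK][BLOCK], double out[BLOCK][BLOCK])
{
    for (int u = 0; u < BLOCK; u++) {
        for (int v = 0; v < BLOCK; v++) {
            double sum = 0.0;
            for (int x = 0; x < BLOCK; x++)
                for (int y = 0; y < BLOCK; y++)
                    sum += in[x][y]
                         * cos((2 * x + 1) * u * PI / (2.0 * BLOCK))
                         * cos((2 * y + 1) * v * PI / (2.0 * BLOCK));
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            out[u][v] = 0.25 * cu * cv * sum;  /* 2/N = 0.25 for an 8x8 block */
        }
    }
}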
There are four picture types: I pictures, or INTRA pictures, which are coded without reference to any other pictures; P pictures, or PREDICTED pictures, which are coded using motion compensation from a previous picture; B pictures, or BIDIRECTIONALLY predicted pictures, which are coded using interpolation from a previous and a future picture; and D pictures, or DC pictures, in which only the low-frequency component is coded and which are intended only for fast-forward search mode. B and P pictures are often called inter pictures. Two other terms that are often used are M and N: M+1 represents the number of frames between successive I and P pictures, whereas N+1 represents the number of frames between successive I pictures. M and N can be varied according to different applications and requirements such as fast random access.
A typical coding scheme will contain a mix of I, P and B pictures, for example an I picture every 10 to 15 pictures and two B pictures between successive I and P pictures (Figure 4).
[Figure 4: A typical sequence of I, P and B pictures]
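To make the M and N terminology concrete, the small sketch below prints the picture types of one group of pictures in display order. It assumes the simple convention that an anchor (I or P) picture occurs every m pictures and an I picture every n pictures; conventions differ slightly between texts, so treat the parameters as illustrative.

#include <stdio.h>

/* Print one group of pictures in display order, assuming an anchor
 * (I or P) picture every m pictures and an I picture every n pictures. */
void print_gop(int m, int n)
{
    for (int i = 0; i < n; i++) {
        if (i == 0)
            putchar('I');   /* the group starts with an I picture   */
        else if (i % m == 0)
            putchar('P');   /* anchor pictures at regular intervals */
        else
            putchar('B');   /* bidirectionally predicted pictures   */
    }
    putchar('\n');
}

int main(void)
{
    print_gop(3, 12);   /* prints the typical pattern IBBPBBPBBPBB */
    return 0;
}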
Prediction (P Frame)
The predicted picture is the previous picture modified by motion compensation. Motion vectors are calculated for each macroblock; the motion vector is applied to all four luminance blocks in the macroblock, and the motion vector for the two chrominance blocks is derived from the luma vector. This technique relies upon the assumption that, within a macroblock, the difference between successive pictures can be represented simply as a translation described by a vector (i.e. there is very little difference between successive pictures, the key difference being the position of the pixels) (Figure 5).
[Figure 5: Motion-compensated prediction]
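A hedged sketch of what "calculating a motion vector" means in practice: for one 16x16 macroblock, search a window of displacements in the previous picture and keep the one with the smallest sum of absolute differences (SAD). The exhaustive search shown here is only for illustration; real encoders use far faster strategies.

#include <stdlib.h>
#include <limits.h>

#define MB 16   /* macroblock size in luma samples */

/* Find the displacement (best_dx, best_dy) that best predicts the
 * macroblock at (mx, my) in cur from the previous frame ref.
 * Both frames are width x height arrays of 8-bit luma samples. */
void motion_search(const unsigned char *cur, const unsigned char *ref,
                   int width, int height, int mx, int my, int range,
                   int *best_dx, int *best_dy)
{
    long best_sad = LONG_MAX;
    *best_dx = *best_dy = 0;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            if (mx + dx < 0 || my + dy < 0 ||
                mx + dx + MB > width || my + dy + MB > height)
                continue;                      /* candidate block leaves the frame */
            long sad = 0;
            for (int y = 0; y < MB; y++)
                for (int x = 0; x < MB; x++)
                    sad += abs(cur[(my + y) * width + (mx + x)] -
                               ref[(my + dy + y) * width + (mx + dx + x)]);
            if (sad < best_sad) {              /* keep the best match so far */
                best_sad = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}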
Interpolation (B Frame)
Interpolation (or bidirectional prediction) achieves high compression because the picture is represented simply as an interpolation between the past and future I or P pictures (again, this is performed on a macroblock basis). Pictures are not transmitted in display order but in the order in which the decoder requires them to decode the bitstream (the decoder must of course have the reference picture(s) before any interpolated or predicted pictures can be decoded).

MULTIMEDIA DATABASE SYSTEM

A multimedia database is a database that hosts one or more primary media file types, such as .txt (documents), .jpg (images), .swf (video), .mp3 (audio), etc. These media types loosely fall into three main categories:
  • Static media (time-independent, e.g. images and handwriting)
  • Dynamic media (time-dependent, e.g. video and sound bites)
  • Dimensional media (e.g. 3D games or computer-aided drafting programs - CAD)
All primary media files are stored in binary strings of zeros and ones, and are encoded according to file type.
The term "data" is typically referenced from the computer point of view, whereas the term "multimedia" is referenced from the user point of view.

Types of Multimedia Databases

There are numerous different types of multimedia databases, including:
  • The Authentication Multimedia Database (also known as a Verification Multimedia Database, e.g. retina scanning), which performs a 1:1 data comparison
  • The Identification Multimedia Database, which performs a one-to-many data comparison (e.g. passwords and personal identification numbers)
  • The newly emerging Biometrics Multimedia Database, which specializes in automatic human verification based on algorithms over a person's behavioural or physiological profile.
This method of identification is superior to traditional multimedia database methods, which require the typical input of personal identification numbers and passwords: the person being identified presents nothing but themselves at the point where the identification check takes place, so there is no PIN or password to remember. Fingerprint identification technology is also based on this type of multimedia database.

Difficulties Involved with Multimedia Databases

The difficulties involved in making these different types of multimedia databases readily accessible to humans include:
  • The tremendous amount of bandwidth they consume;
  • Creating globally accepted data-handling platforms, such as Joomla, and the special considerations that these new multimedia database structures require;
  • Creating a globally accepted operating system, including the applicable storage and resource management programs needed to accommodate the vast global hunger for multimedia information;
  • Accommodating the various human interfaces needed to handle 3D interactive objects in a logically perceived manner (e.g. SecondLife.com);
  • Accommodating the vast resources required to utilize artificial intelligence to its fullest potential, including computer sight and sound analysis methods;
  • The fact that historic relational databases (e.g. the Binary Large Objects - BLOBs - developed for SQL databases to store multimedia data) do not conveniently support content-based searches for multimedia content.
This is because a relational database cannot recognize the internal structure of a Binary Large Object, so the individual multimedia components inside it cannot be retrieved.
Basically, a relational database is an "everything or nothing" structure, with files stored and retrieved as a whole, which makes it very inefficient for making multimedia data easily accessible to humans.
In order to effectively accommodate multimedia data, a database management system such as an Object-Oriented Database (OODB) or an Object-Relational Database Management System (ORDBMS) is required.
Examples of Object-Relational Database Management Systems include Odaptor (HP), UniSQL, ODB-II, and Illustra.
The flip side of the coin is that, unlike non-multimedia data stored in relational databases, multimedia data cannot easily be indexed, retrieved or classified except by way of social bookmarking and ranking/rating by actual humans.
This is made possible by metadata retrieval methods, commonly referred to as tags and tagging. This is why you can search for, say, dogs, and a picture comes up based on your text search term.
This is also referred to as schematic mode, whereas searching with a picture of a dog to locate other dog pictures is referred to as paradigmatic mode.
However, metadata retrieval, search, and identification methods are severely lacking in their ability to properly define uniform space and texture descriptions, such as the spatial relationships between 3D objects.
The Content-Based Retrieval (CBR) multimedia database search method, however, is specifically designed for these types of searches. In other words, if you were to search using an image or sub-image, you would be shown other images or sub-images related in some way to your particular search, by way of colour ratio or pattern, etc.
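As a toy illustration of matching "by way of colour ratio", the sketch below builds a coarse RGB histogram for an image and compares two histograms by intersection. The 4x4x4 binning and the function names are assumptions made up for this example, not part of any particular CBR system.

#define BINS 64   /* 4 x 4 x 4 coarse RGB histogram */

/* Build a normalised colour histogram from an array of packed RGB pixels. */
void build_histogram(const unsigned char *rgb, int npixels, double hist[BINS])
{
    for (int i = 0; i < BINS; i++)
        hist[i] = 0.0;
    for (int i = 0; i < npixels; i++) {
        int r = rgb[3 * i]     >> 6;   /* keep only the top 2 bits of each channel */
        int g = rgb[3 * i + 1] >> 6;
        int b = rgb[3 * i + 2] >> 6;
        hist[(r << 4) | (g << 2) | b] += 1.0 / npixels;
    }
}

/* Histogram intersection: 1.0 means identical colour distributions. */
double histogram_similarity(const double a[BINS], const double b[BINS])
{
    double sim = 0.0;
    for (int i = 0; i < BINS; i++)
        sim += (a[i] < b[i]) ? a[i] : b[i];
    return sim;
}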





MULTIMEDIA DATABASE SERVER
Characteristics of Multimedia Data

  • Large number of objects
  • Large object sizes
  • Very high dimensionality
  • Retrieval by content
  • Similar but not exactly the same
  • Real-time constraints
  • Spatial and temporal dependencies e.g., as in video data
Features of a Multimedia Server

  • Support for a variety of multimedia types and formats
  • Real-time guarantees
  • Scalable
  • Reliable

Client/Server Multimedia System

  • Centralized server
  • Uses the server host to perform all file system functions
  • Storage elements behind the server
  • Server becomes bottleneck with increasing users
[Figure: Centralized client/server multimedia system]




Scalability of a Multimedia Server
  • Scale up with increasing user pool
  • Should not involve centralized entity
  • Distribute work among participating entities
  • Provide real-timeliness
Architecture for Distributed Multimedia Server


[Figure: Architecture of a distributed multimedia server]

Image File Formats


GIF
GIF was developed by CompuServe to show images online (in 1987, for 8-bit video boards, before JPG and 24-bit color were in use). GIF uses indexed color, which is limited to a palette of only 256 colors. GIF was a great match for the old 8-bit 256-color video boards, but is inappropriate for today's 24-bit photo images.
GIF files do NOT store the image's scaled resolution ppi number, so scaling is necessary every time one is printed. This is of no importance for screen or web images. GIF file format was designed for CompuServe screens, and screens don't use ppi for any purpose. Our printers didn't print images in 1987, so it was useless information, and CompuServe simply didn't bother to store the printing resolution in GIF files.
GIF is still an excellent format for graphics, and this is its purpose today, especially on the web. Graphic images (like logos or dialog boxes) use few colors, and being limited to 256 colors is not important for a 3-color logo. A 16-color GIF is a very small file, much smaller and cleaner than any JPG, and ideal for graphics on the web.
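To see what "indexed color" means in code, here is a minimal sketch of how a 256-entry palette turns the stored index bytes back into RGB values; the structure and function names are invented for the example and are not the actual GIF decoder data structures.

/* One palette entry: an indexed-color image stores a single index per
 * pixel, and the palette supplies the RGB value for that index. */
struct palette_entry {
    unsigned char r, g, b;
};

/* Expand an indexed image into 24-bit RGB using a palette of up to 256 colors. */
void expand_indexed(const unsigned char *indices, int npixels,
                    const struct palette_entry palette[256],
                    unsigned char *rgb_out)
{
    for (int i = 0; i < npixels; i++) {
        struct palette_entry c = palette[indices[i]];
        rgb_out[3 * i]     = c.r;
        rgb_out[3 * i + 1] = c.g;
        rgb_out[3 * i + 2] = c.b;
    }
}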

Tag Image File Format (TIFF)

Many image file formats have an image header with fixed fields containing information such as image dimensions, color space specification, etc. The TIFF file format is different in that it allows for a flexible set of information fields. There exists a specification for many of these information fields, called 'tags', ranging from the most fundamental, like image dimensions, through optional ones like copyright information, up to so-called 'private tags' or 'custom tags' that you can define to hold your own application-specific information. The TIFF specification defines a framework for an image header called an 'IFD' (Image File Directory), which is essentially a flexible set of exactly those tags that the TIFF writer software wishes to specify.
One final important difference between TIFF and most other image file formats is that TIFF defines support for multiple images in a single file. Such a file is then called 'multi-page' TIFF. Thus, the TIFF format is very well suited to e.g. store the many pages of a single fax in a single file.
Another major difference between TIFF and most other image file formats is that TIFF allows for a wide range of different compression schemes and color spaces.
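To make the IFD idea concrete, each directory entry in a classic TIFF file is a small fixed-size record; the C sketch below shows its layout (field names are chosen for readability, not taken from any particular library).

#include <stdint.h>

/* One 12-byte entry of a TIFF Image File Directory. An IFD is an entry
 * count, an array of these entries, and an offset to the next IFD
 * (0 if this is the last one, e.g. for a single-page file). */
struct tiff_ifd_entry {
    uint16_t tag;          /* what the field means, e.g. 256 = ImageWidth  */
    uint16_t type;         /* data type, e.g. 3 = SHORT, 4 = LONG          */
    uint32_t count;        /* how many values of that type follow          */
    uint32_t value_offset; /* the value itself if it fits in 4 bytes,      */
                           /* otherwise the file offset where it is stored */
};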
BMP
BMP is a standard file format for computers running the Windows operating system. The format was developed by Microsoft for storing bitmap files in a device-independent bitmap (DIB) format that will allow Windows to display the bitmap on any type of display device. The term “device independent” means that the bitmap specifies pixel color in a form independent of the method used by a display to represent color.

General information

Since BMP is a fairly simple file format, its structure is pretty straightforward. Each bitmap file contains:
  • a bitmap-file header: this contains information about the type, size, and layout of a device-independent bitmap file.
  • a bitmap-information header which specifies the dimensions, compression type, and color format for the bitmap.
  • a colour table, defined as an array of RGBQUAD structures, contains as many elements as there are colours in the bitmap. The colour table is not present for bitmaps with 24 color bits because each pixel is represented by 24-bit red-green-blue (RGB) values in the actual bitmap data area.
  • an array of bytes that defines the bitmap bits. These are the actual image data, represented by consecutive rows, or “scan lines,” of the bitmap. Each scan line consists of consecutive bytes representing the pixels in the scan line, in left-to-right order.
BMP files always contain RGB data. The file can be:
  • 1-bit: 2 colors (monochrome)
  • 4-bit: 16 colors
  • 8-bit: 256 colors.
  • 24-bit: 16,777,216 colors (256 shades each of red, green and blue)
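For readers who prefer to see the layout as code, here is a rough sketch of the two headers described above using fixed-width C types; it follows the usual BITMAPFILEHEADER/BITMAPINFOHEADER layout, but packing directives are omitted and the real definitions live in the Windows headers.

#include <stdint.h>

/* Bitmap-file header: the 14 bytes at the start of every .bmp file. */
struct bmp_file_header {
    uint16_t type;        /* the two characters "BM"                  */
    uint32_t size;        /* total file size in bytes                 */
    uint16_t reserved1;
    uint16_t reserved2;
    uint32_t off_bits;    /* offset from file start to the pixel data */
};

/* Bitmap-information header (40-byte BITMAPINFOHEADER layout). */
struct bmp_info_header {
    uint32_t header_size;      /* size of this header: 40               */
    int32_t  width;            /* image width in pixels                 */
    int32_t  height;           /* image height in pixels                */
    uint16_t planes;           /* always 1                              */
    uint16_t bit_count;        /* bits per pixel: 1, 4, 8 or 24         */
    uint32_t compression;      /* 0 = uncompressed (BI_RGB)             */
    uint32_t image_size;       /* size of the pixel data in bytes       */
    int32_t  x_pels_per_meter; /* horizontal resolution                 */
    int32_t  y_pels_per_meter; /* vertical resolution                   */
    uint32_t colours_used;     /* number of color-table entries in use  */
    uint32_t colours_important;
};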

Portable Network Graphics (PNG)

The Portable Network Graphics (PNG) format was designed to replace the older and simpler GIF format and, to some extent, the much more complex TIFF format.
PNG really has three main advantages over GIF: alpha channels (variable transparency), gamma correction (cross-platform control of image brightness), and two-dimensional interlacing (a method of progressive display). PNG also compresses better than GIF in almost every case, but the difference is generally only around 5% to 25%, not a large enough factor to encourage folks to switch on that basis alone. One GIF feature that PNG does not try to reproduce is multiple-image support, especially animations; PNG was and is intended to be a single-image format only.
CGM (Computer Graphics Metafile)

It was specifically designed as a common format for the platform-independent interchange of bitmap and vector data, and for use in conjunction with a variety of input and output devices.
CGM uses three types of syntactical encoding formats. All CGM files contain data encoded using one of these three methods:
  • Character-based, used to produce the smallest possible file size for ease of storage and speed of data transmission
  • Binary encoded, which facilitates exchange and quick access by software applications
  • Clear-text encoded, designed for human readability and ease of modification using an ASCII text editor
CGM is intended for the storage of graphics data only. It is sometimes (erroneously) thought to be a data transfer standard for CAD/CAM data, like IGES, or a 3D graphic object model data storage standard. CGM is, however, quite suited for the interchange of renderings from CAD/CAM systems, but not for the storage of the engineering model data itself.

Scalable Vector Graphics (SVG)

SVG is a language for describing two-dimensional graphics and graphical applications in XML. SVG 1.1 is a W3C Recommendation and is the most recent version of the full specification. SVG Tiny 1.2 is a W3C Recommendation, and targets mobile devices. There are various SVG modules under development which will extend previous versions of the specification, and which will serve as the core of future SVG developments.

Video on Demand

Pay-per-view (PPV) services can be considered a primitive form of distributing media on demand. The subscriber signs up for an account, which enables him to access the service, and is charged an installation fee and a periodic rental. This scheme differs from pure broadcast in the sense that it gives the subscriber control over what he receives according to his subscription.
Quasi Video-on-Demand (Q-VoD) services take selective subscription a step further by multicasting media content to a group of users who share a common set of interests. To access media content that is not available in the group he belongs to, a subscriber can switch between groups. Near Video-on-Demand (N-VoD) services simulate media access control functions such as forward and reverse in discrete time intervals. This capability is usually provided by offering multiple channels carrying the same media content, skewed in time.
All these concepts lead to the introduction of a True Video-on-Demand system. To give control to the subscriber, a True Video-on-Demand system requires a feedback mechanism installed at the subscriber device that helps the Video-on-Demand service engine control the rate of data transfer over the network. Depending on the available network bandwidth, the service provider signals the underlying encoding engine to adjust the media encoding bit rate so that media can be delivered to the subscriber, trading off between quality and request-response latency.
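A hedged sketch of the rate-adaptation decision just described: pick the highest encoding bit rate that still fits inside the currently measured network bandwidth. The tier values and the 20% safety margin are invented for the example and are not from any real service.

/* Candidate encoding bit rates in kbit/s, lowest to highest quality. */
static const int bitrate_tiers_kbps[] = { 256, 512, 1024, 2048, 4096 };
static const int num_tiers =
    sizeof bitrate_tiers_kbps / sizeof bitrate_tiers_kbps[0];

/* Return the highest tier that fits the measured bandwidth, keeping
 * 20% headroom so transient dips do not immediately stall playback. */
int choose_bitrate_kbps(int measured_bandwidth_kbps)
{
    int usable = measured_bandwidth_kbps - measured_bandwidth_kbps / 5;
    int chosen = bitrate_tiers_kbps[0];   /* never go below the lowest tier */
    for (int i = 0; i < num_tiers; i++)
        if (bitrate_tiers_kbps[i] <= usable)
            chosen = bitrate_tiers_kbps[i];
    return chosen;
}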
The Challenges
1. Load distribution on the server: to support multiple connection requests from users and keep response times to a minimum.
2. Media content management: this includes large storage requirements, effective content management, replication strategy, etc.
3. Adapting to dynamic network bandwidth: as the client-server link may not always be consistent, the content must be managed in response to network changes while still maintaining the quality of the media.
4. Deciding on buffer/cache sizes: to give the user better quality and faster response, the system may have to decide upon the buffer size and cache.
5. Rate control: to adapt to the network, the system may need to vary the transport and encoding rates.
6. Scalability and cost effectiveness.
7. Providing reliability and availability.
In addition to these parameters, such a set-up needs to be highly fault-tolerant and fairly scalable to ensure subscriber satisfaction. Several architectures have been proposed in this regard which address the above-mentioned issues by employing expensive hardware infrastructure.

Computer Graphics

Computer displays are made up from grids of small rectangular cells called pixels. The picture is built up from these cells. The smaller and closer the cells are together, the better the quality of the image, but the bigger the file needed to store the data. If the number of pixels is kept constant, the size of each pixel will grow and the image becomes grainy (pixellated) when magnified, as the resolution of the eye enables it to pick out individual pixels.
Vector graphics is the use of geometrical primitives such as points, lines, curves, and shapes or polygon(s), which are all based on mathematical equations, to represent images in computer graphics.
Vector graphics files store the lines, shapes and colours that make up an image as mathematical formulae. A vector graphics program uses these mathematical formulae to construct the screen image, building the best quality image possible, given the screen resolution. The mathematical formulae determine where the dots that make up the image should be placed for the best results when displaying the image. Since these formulae can produce an image scalable to any size and detail, the quality of the image is only determined by the resolution of the display, and the file size of vector data generating the image stays the same. Printing the image to paper will usually give a sharper, higher resolution output than printing it to the screen but can use exactly the same vector data file.
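To illustrate how a stored formula becomes dots on the display, the sketch below rasterises one vector primitive (a straight line segment) into pixels using the classic Bresenham approach; set_pixel() is a stand-in for whatever plotting call the display layer actually provides.

#include <stdlib.h>

/* Stand-in for the display layer: plot one pixel of the output grid. */
void set_pixel(int x, int y);

/* Rasterise the line segment (x0,y0)-(x1,y1): turn one vector primitive
 * into the grid of pixels that best approximates it. */
void draw_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    for (;;) {
        set_pixel(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
    }
}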
3D Graphics

A picture that has or appears to have height, width and depth is three-dimensional (or 3-D). A picture that has height and width but no depth is two-dimensional (or 2-D).
[Figure: Two-dimensional triangles (left) and a three-dimensional pyramid (right)]
Take a look at the triangles above. Each of the triangles on the left has three lines and three angles -- all that's needed to tell the story of a triangle. We see the image on the right as a pyramid -- a 3-D structure with four triangular sides. Note that it takes five lines and six angles to tell the story of a pyramid -- nearly twice the information required to tell the story of a triangle.

What Are 3-D Graphics?

For many of us, games on a computer or advanced game system are the most common ways we see 3-D graphics. These games, or movies made with computer-generated images, have to go through three major steps to create and present a realistic 3-D scene:
  1. Creating a virtual 3-D world.
  2. Determining what part of the world will be shown on the screen.
  3. Determining how every pixel on the screen will look so that the whole image appears as realistic as possible.
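Step 2 in the list above boils down to projecting points of the virtual world onto the flat screen. A minimal sketch of a pinhole-style perspective projection (the camera sits at the origin looking along +z; the focal length and screen size are example parameters):

/* Project a 3-D point onto a screen of the given size. Returns 0 when
 * the point lies behind the camera and cannot be shown. */
int project_point(double x, double y, double z,
                  double focal_length, int screen_w, int screen_h,
                  int *sx, int *sy)
{
    if (z <= 0.0)
        return 0;                        /* behind the viewer */
    *sx = (int)(screen_w / 2.0 + focal_length * x / z);
    *sy = (int)(screen_h / 2.0 - focal_length * y / z);  /* screen y grows downward */
    return 1;
}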

What Is Animation?

What is animation? To put it simply, animation is the illusion of movement. When you watch television, you see lots of things moving around; you are really being tricked into believing that you are seeing movement. In the case of television, the illusion of movement is created by displaying a rapid succession of images with slight changes in content. The human eye perceives these changes as movement because it blends rapidly changing images together (persistence of vision). The eye can be tricked into perceiving movement with as few as 12 frames per second. It should come as no surprise that frames per second (fps) is the standard unit of measure for animation, and that computers use the same animation technique as television sets to trick us into seeing movement.

Types of Animation

Frame-Based Animation
Frame-based animation is the simpler of the animation techniques. It involves simulating movement by displaying a sequence of static frames. A movie is a perfect example of frame-based animation; each frame of the film is a frame of animation. When the frames are shown in rapid succession, they create the illusion of movement. In frame-based animation, there is no concept of an object distinguishable from the background; everything is reproduced on each frame. This is an important point, because it distinguishes frame-based animation from cast-based animation.
Cast-Based Animation
Cast-based animation, which also is called sprite animation, is a very popular form of animation and has seen a lot of usage in games. Cast-based animation involves objects that move independently of the background. At this point, you may be a little confused by the use of the word "object" when referring to parts of an image. In this case, an object is something that logically can be thought of as a separate entity from the background of an image. For example, in the animation of a forest, the trees might be part of the background, but a deer would be a separate object moving independently of the background.
Each object in a cast-based animation is referred to as a sprite, and can have a changing position. Almost every video game uses sprites to some degree. For example, every object in the classic Asteroids game is a sprite moving independently of the other objects. Sprites generally are assigned a position and a velocity, which determine how they move.
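A minimal sketch of the position-plus-velocity idea: each sprite carries its own state and is moved every frame without touching the background (the field names are illustrative, not from any particular engine).

/* One independently moving object in a cast-based animation. */
struct sprite {
    float x, y;     /* current position on screen    */
    float vx, vy;   /* velocity in pixels per second */
    int   frame;    /* index of the image to draw    */
};

/* Advance a sprite by dt seconds; the background is left untouched. */
void sprite_update(struct sprite *s, float dt)
{
    s->x += s->vx * dt;
    s->y += s->vy * dt;
}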

Sprite Animation

Sprite animation involves the movement of individual graphic objects called sprites. Unlike simple frame animation, sprite animation involves considerably more overhead. More specifically, it is necessary not only to develop a sprite class, but also a sprite management class for keeping up with all the sprites. This is necessary because sprites need to be able to interact with each other through a common interface.
Shading

Shading is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas.
Flat shading
  • Entire surface (polygon) has one colour
  • Cheapest to compute, and least accurate (so you need a dense triangulation for decent-looking results)
  • OpenGL – glShadeModel(GL_FLAT)
Phong shading
  • Compute illumination for every pixel during scan conversion
  • Interpolate normal at each pixel too
  • Expensive, but more accurate
  • Not supported in OpenGL (directly)
Gouraud shading
  • Just compute illumination at vertices
  • Interpolate vertex colours across polygon pixels
  • Cheaper, but less accurate (spreads highlights)
  • OpenGL - glShadeModel(GL_SMOOTH)
Phong illumination
  • Don’t confuse shading and illumination!
  • Shading describes how to apply an illumination model to a polygonal surface patch
  • All these shading methods could use Phong illumination (ambient, diffuse, and specular) or any other local illumination model
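In legacy (fixed-function) OpenGL, the flat versus Gouraud choice listed above really is a single call. A hedged fragment, assuming a GL context has already been created elsewhere:

#include <GL/gl.h>

/* Draw one triangle flat shaded and one smooth (Gouraud) shaded.
 * With GL_FLAT the whole triangle takes a single colour; with GL_SMOOTH
 * the per-vertex colours are interpolated across the triangle's pixels. */
void draw_shaded_triangles(void)
{
    glShadeModel(GL_FLAT);                 /* one colour per polygon     */
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.9f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(-0.1f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
    glEnd();

    glShadeModel(GL_SMOOTH);               /* interpolate vertex colours */
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(0.1f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.9f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(0.5f,  0.5f);
    glEnd();
}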
Anti-Aliasing
Anti-aliasing is the technique of minimizing the distortion artifacts, known as aliasing, that occur when representing a high-resolution signal at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications.
Anti-aliasing means removing signal components that have a higher frequency than can be properly resolved by the recording (or sampling) device. This removal is done before (re)sampling at a lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as the black-and-white noise visible in the aliased example in Figure 1 below.
[Figure 1: The same image aliased (left) and anti-aliased (centre and right)]
Another method for reducing jaggies is called smoothing, in which the printer changes the size and horizontal alignment of dots to make curves smoother.
Antialiasing is sometimes called oversampling.
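One simple way to see oversampling in code: render the image at several times the target resolution and average each block of samples down to a single output pixel (a box filter). This sketch works on greyscale samples and is only meant to illustrate the idea.

/* Downsample an image rendered at factor x the target resolution:
 * each output pixel is the average of a factor x factor block of samples,
 * which removes detail the final resolution could not represent. */
void box_downsample(const unsigned char *hi, int out_w, int out_h, int factor,
                    unsigned char *out)
{
    int hi_w = out_w * factor;             /* width of the oversampled image */
    for (int y = 0; y < out_h; y++) {
        for (int x = 0; x < out_w; x++) {
            int sum = 0;
            for (int sy = 0; sy < factor; sy++)
                for (int sx = 0; sx < factor; sx++)
                    sum += hi[(y * factor + sy) * hi_w + (x * factor + sx)];
            out[y * out_w + x] = (unsigned char)(sum / (factor * factor));
        }
    }
}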
Morphing
Morphing is a special effect in motion pictures and animations that changes (or morphs) one image into another through a seamless transition.
Morphing is an image processing technique used for the metamorphosis from one image to another. The idea is to get a sequence of intermediate images which, when put together with the original images, represent the change from one image to the other. The simplest method of transforming one image into another is to cross-dissolve between them: the colour of each pixel is interpolated over time from the first image value to the corresponding second image value. This is not very effective at suggesting the actual metamorphosis; for morphs between faces, the result does not look good if the two faces do not have approximately the same shape.
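The cross-dissolve described above is nothing more than per-pixel linear interpolation over time; a minimal sketch for greyscale images:

/* Produce the cross-dissolve frame at time t, where t = 0.0 gives the
 * first image and t = 1.0 the second. Every pixel is interpolated
 * independently; no shape warping is done, which is why a plain
 * cross-dissolve looks poor when the two shapes do not match. */
void cross_dissolve(const unsigned char *img_a, const unsigned char *img_b,
                    int npixels, double t, unsigned char *out)
{
    for (int i = 0; i < npixels; i++)
        out[i] = (unsigned char)((1.0 - t) * img_a[i] + t * img_b[i] + 0.5);
}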
[Figures: Examples of facial-feature manipulation and image warping]
The following examples show some of the uses of warping. The first set of images shows how facial features and/or expressions can be manipulated. The second set shows how the overall shape of the image can be distorted (e.g., to match the shape of a second image for use in the morphing algorithm).

Sunday, March 28, 2010

File Handling in C Language

In this section, we will discuss files, which are very important for storing information permanently. We store information in files for many purposes, such as data processing by our programs.

What is a File?

Abstractly, a file is a collection of bytes stored on a secondary storage device, which is generally a disk of some kind. The collection of bytes may be interpreted, for example, as characters, words, lines, paragraphs and pages from a textual document; fields and records belonging to a database; or pixels from a graphical image. The meaning attached to a particular file is determined entirely by the data structures and operations used by a program to process the file. It is conceivable (and it sometimes happens) that a graphics file will be read and displayed by a program designed to process textual data. The result is that no meaningful output occurs (probably) and this is to be expected. A file is simply a machine decipherable storage media where programs and data are stored for machine usage.
Essentially there are two kinds of files that programmers deal with: text files and binary files. These two classes of files will be discussed in the following sections.

ASCII Text files

A text file can be thought of as a stream of characters that a computer can process sequentially. It is processed only sequentially and only in the forward direction. For this reason a text file is usually opened for only one kind of operation (reading, writing, or appending) at any given time.
Similarly, since text files only process characters, they can only read or write data one character at a time. (In C Programming Language, Functions are provided that deal with lines of text, but these still essentially process data one character at a time.) A text stream in C is a special kind of file. Depending on the requirements of the operating system, newline characters may be converted to or from carriage-return/linefeed combinations depending on whether data is being written to, or read from, the file. Other character conversions may also occur to satisfy the storage requirements of the operating system. These translations occur transparently and they occur because the programmer has signalled the intention to process a text file.

Binary files

At one level, a binary file is no different from a text file: it is a collection of bytes. In C Programming Language a byte and a character are equivalent, hence a binary file is also referred to as a character stream. There are, however, two essential differences.
  1. No special processing of the data occurs and each byte of data is transferred to or from the disk unprocessed.
  2. C Programming Language places no constructs on the file, and it may be read from, or written to, in any manner chosen by the programmer.
Binary files can be processed sequentially or, depending on the needs of the application, they can be processed using random access techniques. In C Programming Language, processing a file using random access techniques involves moving the current file position to an appropriate place in the file before reading or writing data. This indicates a second characteristic of binary files: they are generally processed using read and write operations simultaneously.
For example, a database file will be created and processed as a binary file. A record update operation will involve locating the appropriate record, reading the record into memory, modifying it in some way, and finally writing the record back to disk at its appropriate location in the file. These kinds of operations are common to many binary files, but are rarely found in applications that process text files.
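A hedged sketch of that record-update cycle using fseek(), fread() and fwrite() on a file of fixed-size records; the record layout, file name and field being changed are all invented for the example.

#include <stdio.h>

/* An example fixed-size record; any layout would do, as long as every
 * record in the file has the same size. */
struct record {
    int    id;
    char   name[32];
    double balance;
};

/* Read record number n, modify it in memory, and write it back in place. */
int update_record(const char *filename, long n, double new_balance)
{
    struct record rec;
    FILE *fp = fopen(filename, "r+b");          /* read/write, binary     */
    if (fp == NULL)
        return -1;

    fseek(fp, n * (long)sizeof rec, SEEK_SET);  /* locate the record      */
    if (fread(&rec, sizeof rec, 1, fp) != 1) {  /* read it into memory    */
        fclose(fp);
        return -1;
    }

    rec.balance = new_balance;                  /* modify it              */

    fseek(fp, n * (long)sizeof rec, SEEK_SET);  /* back to the same place */
    fwrite(&rec, sizeof rec, 1, fp);            /* write it back to disk  */
    fclose(fp);
    return 0;
}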

Creating a file and output some data

In order to create files we have to learn about File I/O i.e. how to write data into a file and how to read data from a file. We will start this section with an example of writing data to a file. We begin as before with the include statement for stdio.h, then define some variables for use in the example including a rather strange looking new type.
/* Program to create a file and write some data to the file */

#include <stdio.h>
#include <string.h>   /* for strcpy() */

int main(void)
{
     FILE *fp;
     char stuff[25];
     int index;

     fp = fopen("TENLINES.TXT", "w"); /* open for writing */

     strcpy(stuff, "This is an example line.");

     for (index = 1; index <= 10; index++)
        fprintf(fp, "%s Line number %d\n", stuff, index);

     fclose(fp); /* close the file before ending program */

     return 0;
}


The type FILE is used for a file variable and is defined in the stdio.h file. It is used to define a file pointer for use in file operations. Before we can write to a file, we must open it. What this really means is that we must tell the system that we want to write to a file and what the file name is. We do this with the fopen() function illustrated in the first line of the program. The file pointer, fp in our case, points to the file and two arguments are required in the parentheses, the file name first, followed by the file type.


The file name is any valid DOS file name, and can be expressed in upper or lower case letters, or even mixed if you so desire. It is enclosed in double quotes. For this example we have chosen the name TENLINES.TXT. This file should not exist on your disk at this time. If you have a file with this name, you should change its name or move it, because when we execute this program its contents will be erased. If you don't have a file by this name, that is good, because we will create one and put some data into it. You are permitted to include a directory with the file name. The directory must, of course, be a valid directory, otherwise an error will occur. Also, because of the way C handles literal strings, the directory separation character '\' must be written twice. For example, if the file is to be stored in the \PROJECTS sub-directory then the file name should be entered as "\\PROJECTS\\TENLINES.TXT". The second parameter is the file attribute and can be any of three letters, r, w, or a, and must be lower case.

Reading (r)

When an r is used, the file is opened for reading, a w is used to indicate a file to be used for writing, and an a indicates that you desire to append additional data to the data already in an existing file. Most C compilers have other file attributes available; check your Reference Manual for details. Using the r indicates that the file is assumed to be a text file. Opening a file for reading requires that the file already exist. If it does not exist, the file pointer will be set to NULL and can be checked by the program.

Here is a small program that reads a file and displays its contents on screen.

/* Program to display the contents of a file on screen */

#include <stdio.h>

int main(void)
{
   FILE *fp;
   int c;

   fp = fopen("prog.c", "r");
   c = getc(fp);

   while (c != EOF)
   {
               putchar(c);
               c = getc(fp);
   }

   fclose(fp);

   return 0;
}

Writing (w)

When a file is opened for writing, it will be created if it does not already exist and it will be reset if it does, resulting in the deletion of any data already there. Using the w indicates that the file is assumed to be a text file.

Here is the program to create a file and write some data into the file.

#include <stdio.h>

int main()
{
    FILE *fp;

    fp = fopen("file.txt", "w");   /* create a file and add text */

    fprintf(fp, "%s", "This is just an example :)"); /* writes data to the file */

    fclose(fp); /* done! */

    return 0;
}

Appending (a)
When a file is opened for appending, it will be created if it does not already exist and it will be initially empty. If it does exist, the data input point will be positioned at the end of the present data so that any new data will be added to any data that already exists in the file. Using the a indicates that the file is assumed to be a text file.

Here is a program that will add text to a file which already exists and already contains some text.

#include <stdio.h>

int main()
{
    FILE *fp;

    fp = fopen("file.txt", "a");

    fprintf(fp, "%s", "This is just an example :)"); /* append some text */

    fclose(fp);

    return 0;
}

Outputting to the file


The job of actually outputting to the file is nearly identical to the outputting we have already done to the standard output device. The only real differences are the new function names and the addition of the file pointer as one of the function arguments. In the example program, fprintf replaces our familiar printf function name, and the file pointer defined earlier is the first argument within the parentheses. The remainder of the statement looks like, and in fact is identical to, the printf statement.

Closing a file


To close a file you simply use the function fclose with the file pointer in the parentheses. Actually, in this simple program, it is not necessary to close the file because the system will close all open files before returning to DOS, but it is good programming practice for you to close all files in spite of the fact that they will be closed automatically, because that would act as a reminder to you of what files are open at the end of each program.

You can open a file for writing, close it, and reopen it for reading, then close it, and open it again for appending, etc. Each time you open it, you could use the same file pointer, or you could use a different one. The file pointer is simply a tool that you use to point to a file, and you decide what file it will point to. Compile and run this program. When you run it, you will not get any output to the monitor because it doesn't generate any. After running it, look at your directory for a file named TENLINES.TXT and type it; that is where your output will be. Compare the output with that specified in the program; they should agree! Do not erase the file named TENLINES.TXT yet; we will use it in some of the other examples in this section.


Reading from a text file


Now for our first program that reads from a file. This program begins with the familiar include, some data definitions, and the file opening statement which should require no explanation except for the fact that an r is used here because we want to read it.


#include <stdio.h>

int main(void)
{
     FILE *fp;
     char c;

     fp = fopen("TENLINES.TXT", "r");
     if (fp == NULL)
          printf("File doesn't exist\n");
     else
     {
          do
          {
                c = getc(fp); /* get one character from the file */
                putchar(c);   /* display it on the monitor       */
          } while (c != EOF); /* repeat until EOF (end of file)  */

          fclose(fp);
     }

     return 0;
}

In this program we check to see that the file exists, and if it does, we execute the main body of the program. If it doesn't, we print a message and quit. If the file does not exist, the system will set the pointer equal to NULL, which we can test. The main body of the program is one do-while loop in which a single character is read from the file and output to the monitor until an EOF (end of file) is detected from the input file. The file is then closed and the program is terminated. At this point, we have the potential for one of the most common and most perplexing problems of programming in C. The variable returned from the getc function is a character, so we can use a char variable for this purpose. There is a problem that could develop here if we happened to use an unsigned char, however, because C usually returns a minus one for an EOF, which an unsigned char type variable is not capable of containing. An unsigned char type variable can only have the values of zero to 255, so it will return a 255 for a minus one in C. This is a very frustrating problem to try to find: the program can never find the EOF and will therefore never terminate the loop. This is easy to prevent: always use a char or int type variable for returning an EOF. There is another problem with this program, but we will worry about it when we get to the next program and solve it with the one following that.


After you compile and run this program and are satisfied with the results, it would be a good exercise to change the name of TENLINES.TXT and run the program again to see that the NULL test actually works as stated. Be sure to change the name back because we are still not finished with TENLINES.TXT.

File Handling


In C++ we say data flows as streams into and out of programs. There are different kinds of streams of data flow for input and output, and each stream is associated with a class which contains member functions and definitions for dealing with that particular kind of flow. For example, the ifstream class represents input disc files. Thus each file in C++ is an object of a particular stream class.

The stream class hierarchy


The stream classes are arranged in a rather complex hierarchy. You do not need to understand this hierarchy in detail to program basic file I/O, but a brief overview may be helpful. We have already made extensive use of some of these classes: the extraction operator >> is a member of the istream class and the insertion operator << is a member of the ostream class. Both of these classes are derived from the ios class. The cout object is a predefined object of the ostream_withassign class, which is in turn derived from the ostream class. The classes used for input and output to the video display and keyboard are declared in the header file IOSTREAM.H, which we have routinely included in all our programs.

Stream classes


The ios class is the base class for the entire I/O hierarchy. It contains many constants and member functions common to input and output operations of all kinds. The istream and ostream classes are derived from ios and are dedicated to input and output respectively; their member functions perform both formatted and unformatted operations. The iostream class is derived from both istream and ostream by multiple inheritance, so that other classes can inherit both of them from it. The classes in which we are most interested for file I/O are ifstream for input files, ofstream for output files, and fstream for files that will be used for both input and output. The ifstream, ofstream and fstream classes are declared in the header file FSTREAM.H.

The istream class contains input functions such as
  • get( )
  • getline( )
  • read( )

and overloaded extraction operators.

The ostream class contains functions such as
  • put( )
  • write( )

and overloaded insertion operators.

Writing strings into a file


Let us now consider a program which writes strings in a file.

//program for writing a string in a file

#include<fstream.h>

void main( )

{

ofstream outfile("fl.fil");//create a file for output

outfile<<"harmlessness, truthfulness, calm"<<endl;

outfile<<"renunciation, absence of wrath and fault-finding"<<endl;

outfile<<"compassion for all, non-covetousness, gentleness, modesty"<<endl;

outfile<<"stability. vigour, forgiveness, endurance, cleanliness"<<endl;

outfile<<"absence of malice and of excessive self-esteem"<<endl;

outfile<<"these are the qualities of godmen"<<endl;

}


In the above program, we create an object called outfile, which is a member of the output file stream class. We initialise it to the filename "fl.fil". You can think of outfile as a user-chosen logical name which is associated with the real file on disc called "fl.fil". When any automatic object (outfile is automatic) is defined in a function, it is created in the function and automatically destroyed when the function terminates. When our main( ) function ends, outfile goes out of scope. This automatically calls the destructor, which closes the file. It may be noticed that we do not need to close the file explicitly by any close-file command. The insertion operator << is overloaded in ofstream and works with objects defined from ofstream. Thus, we can use it to output text to the file. The strings are written to the file "fl.fil" in ASCII mode; one can see this from DOS by giving the type command. The file "fl.fil" looks as shown below:

harmlessness, truthfulness, calm
renunciation, absence of wrath and fault-finding
compassion for all, non-covetousness, gentleness, modesty
stability, vigour, forgiveness, endurance, cleanliness
absence of malice and of excessive self-esteem
these are the qualities of godmen

Reading strings from file in C++


The program below illustrates the creation of an object of ifstream class for reading purpose.

//program for reading strings

#include <fstream.h> //for file functions

void main( )
{
const int max = 80; //size of buffer

char buffer[max]; //character buffer

ifstream infile("fl.fil"); //open the file for input

while (infile) //until end-of-file
{
infile.getline(buffer, max); //read a line of text

cout << buffer << endl; //display it
}
}


We define infile as an ifstream object to input records from the file "fl.fil". The insertion operator does not work here. Instead, we read the text from the file, one line at a time, using the getline( ) function. The getline( ) function reads characters until it encounters the '\n' character. It places the resulting string in the buffer supplied as an argument; the maximum size of the buffer is given as the second argument. The contents of each line are displayed after each line is input. Our ifstream object called infile has a value that can be tested for various error conditions - one is the end-of-file. The program checks for the EOF in the while loop so that it can stop reading after the last string.

What is a buffer?


A buffer is a temporary holding area in memory which acts as an intermediary between a program and a file or other I/O device. Information can be transferred between a buffer and a file in large chunks of data of the size most efficiently handled by devices like disc drives. Typically, devices like discs transfer information in blocks of 512 bytes or more, while a program often processes information one byte at a time; the buffer helps match these two disparate rates of information transfer. On output, a program first fills the buffer and then transfers the entire block of data to a hard disc, thus clearing the buffer for the next batch of output. C++ handles input by connecting a buffered stream to a program and to its source of input. Similarly, C++ handles output by connecting a buffered stream to a program and to its output target.


Using put( ) and get( ) for writing and reading characters


The put ( ) and get( ) functions are also members of ostream and istream. These are used to output and input a single character at a time. The program shown below is intended to illustrate the use of writing one character at a time in a file.

//program for writing characters

#include <fstream.h>
#include <string.h>

void main( )
{
char str[] = "do unto others as you would be done by";

ofstream outfile("f2.fil");

for(int i = 0; i < strlen(str); i++)
outfile.put(str[i]);
}

In this program, the length of the string is found by the strlen( ) function, and the characters are output using the put( ) function in a for loop. This file is also an ASCII file.

Reading Characters


The program shown below illustrates the reading of characters from a file.
//program for reading characters of a string

#include <fstream.h>

void main( )
{
char ch;

ifstream infile("f2.fil");

while(infile)
{
infile.get(ch);

cout << ch;
}
}

The program uses get( ) and continues to read until EOF is reached. Each character read from the file is displayed using cout. The contents of the file f2.fil created by the previous program will be displayed on the screen.

Writing an object in a files


Since C++ is an object-oriented language, it is reasonable to wonder how objects can be written to and read from the disc. The program given below is intended to write an object in a file.
//program for writing objects in files

#include<fstream.h>

class employees

{

protected:
int empno;

char name[10];

char dept[5];

char desig[5];

double basic;

double deds;

 
public:

void getdata(void)

{
cout<<endl<<"enter empno";cin>>empno;

cout<<endl<<"enter empname";cin>>name;

cout<<endl<<"enter department ";

cin>>dept; cout<<endl<<"enter designation ";

cin>>desig; cout<<endl<<"enter basic pay ";cin>>basic;

cout<<endl<<"enter deds ";cin>>deds;

}
};

void main(void)
{

employees emp;

emp.getdata( );

ofstream outfile("f3.fil");

outfile.write((char *)&emp, sizeof(emp));

}

This program uses a class named employees and an object named emp. Data can be written only by the function inside the object. This program creates a binary data file named f3.fil. The write( ) function is used for writing; it requires two arguments, the address of the object to be written and the size of the object in bytes. We use the sizeof operator to find the length of the emp object. The address of the object must be cast to type pointer to char.

Binary vs. Character files


You might have noticed that write( ) was used to output binary values, not just characters. To clarify, let us examine this further. Our emp object contained one int data member, three string data members and two double data members; the total number of bytes occupied by the data members comes to 38. It is as if write( ) took a mirror image of the bytes of information in memory and copied them directly to disc, without any intervening translation or formatting. By contrast, the character-based functions take some liberties with the data; for example, they expand the '\n' character into a carriage return and a line feed before storing it to disk.

Reading object from file


The program given below is intended to read the file created in the above program.

//program f or reading data files

#include <fstream.h>

class employees

{

protected:

int empno;

char name[10];

char dept[5];

char desig[5];

double basic;

double deds;


public:

void showdata(void)

{cout<<endl<<"employeenumber: "<<empno;

cout<<endl<<"empname "<<name;

cout<<endl<<"department "<<dept;

cout<<endl<<"designation "<<desig;

cout<<endl<<"basic pay "<<basic:

cout<<endl<<"deds "<<deds;}


void main(void)
{

employees empl;

ifstream infile("f3.fil");

infile.read((char*)&empl, sizeof(empl));

empl.showdata( );

}


It may be noticed that both the read( ) and write( ) functions take similar arguments: we must specify the address at which the disc input will be placed, and we use sizeof to indicate the number of bytes to be read.


The sample output looks as shown below:

employeenumber: 123

empname venkatesa

department elec

designation prog

basic pay 567.89

deds 45.76

Example


This is a small C language program that reads a text file. The program is given a file name as a command-line parameter and reads the file line by line, printing the number of characters and words in each line. The program also prints the number of lines in the text file. The file name should contain the complete physical path, or only the file name if the file is present in the same directory as the program.


#include <stdio.h>
#include <stdlib.h>   /* for exit() */

int main(int argc, char *argv[])
{
  FILE *fp;
  int nchars, nwords, nlines;
  int lastnblank;    /* non-zero if the last character was not a blank */
  int c;

  if (argc != 2)
  {
    printf("Usage: %s filename\n", argv[0]);
    exit(0);
  }

  if ((fp = fopen(argv[1], "r")) == NULL)
  {
    perror("fopen");
    exit(0);
  }

  nchars = nwords = nlines = lastnblank = 0;

  while ((c = getc(fp)) != EOF)
  {
    nchars++;
    if (c == '\n')
    {
      if (lastnblank) nwords++;
      printf("words=%d, characters=%d\n", nwords, nchars);
      nchars = nwords = lastnblank = 0;
      nlines++;
    }
    else
    {
      if (((c == ' ') || (c == '\t')) && lastnblank)
        nwords++;
      lastnblank = ((c != ' ') && (c != '\t'));
    }
  }

  printf("lines=%d\n", nlines);

  fclose(fp);

  return 0;
}

Opening a File
FILE* f; // create a new file pointer
if((f=fopen("file","w"))==NULL)
{ // open a file
printf("could not open file"); // print an error
exit(1);
}

The function fopen() is declared as FILE *fopen(const char *filename, const char *mode);. On success it returns a valid FILE * (file pointer), or NULL if the file indicated by the filename cannot be opened. It takes the filename as its first argument and, as its second argument, one of the following modes:

  • r - open a file in read-mode, set the pointer to the beginning of the file.
  • w - open a file in write-mode, set the pointer to the beginning of the file.
  • a - open a file in write-mode, set the pointer to the end of the file.
  • rb - open a binary-file in read-mode, set the pointer to the beginning of the file.
  • wb - open a binary-file in write-mode, set the pointer to the beginning of the file.
  • ab - open a binary-file in write-mode, set the pointer to the end of the file.
  • r+ - open a file in read/write-mode, if the file does not exist, it will not be created.
  • w+ - open a file in read/write-mode, set the pointer to the beginning of the file.
  • a+ - open a file in read/append mode.
  • r+b - open a binary-file in read/write-mode, if the file does not exist, it will not be created.
  • w+b - open a binary-file in read/write-mode, set the pointer to the beginning of the file.
  • a+b - open a binary-file in read/append mode.

The maximum number of files that can be opened simultaneously is defined as FOPEN_MAX.
Closing a file is done using the fclose() function, which is defined as int fclose(FILE *fp);.

fclose(f); // close the filepointer

Writing to files can be done in various ways:
  • putc() - like fputc()
  • fputc() - int fputc(int character, FILE *stream); - writes a character to a file
  • fputs() - int fputs(const char *string, FILE *stream); - writes a string to a file
  • fprintf() - int fprintf(FILE *stream, const char *format [, argument, ...]); - works like printf() except that it writes to a file instead of STDOUT

Reading from files works in much the same way:
  • getc() - like fgetc()
  • fgetc() - int fgetc(FILE *stream); - reads a character from a file
  • fgets() - char *fgets(char *string, int num, FILE *stream); - reads a string from a file
  • fscanf() - int fscanf(FILE *stream, const char *format [, argument, ...]); - works like scanf() except that it reads from a file instead of STDIN


#include <stdio.h>
#include <stdlib.h>   /* for exit() */

int main()
{
char test[20] = "a teststring";   // buffer is reused by fgets()/fscanf() below

FILE *f;                          // create a new file pointer

if ((f = fopen("file", "w")) == NULL)
{                                 // open a file for writing
  printf("could not open file");  // print an error
  exit(1);
}

fputc(test[0], f);

fputs(test, f);

fprintf(f, "\n%ch%c%c %c%c %s\n", test[2], test[9], test[4], test[9], test[4], test);

fclose(f);

if ((f = fopen("file", "r")) == NULL)
{                                 // open the file again, for reading
  printf("could not open file");  // print an error
  exit(1);
}

char ch;

ch = fgetc(f);

printf("%c\n", ch);

fgets(test, 20, f);

printf("%s", test);

while (!feof(f))
{
  test[0] = '\0';

  fscanf(f, "%s", test);

  printf("%s ", test);
}

fclose(f);

return 0;
}

Thursday, March 25, 2010

Search Engines

What is a search engine?
A search engine is a coordinated set of programmes that includes:
  • A spider (also called a "crawler" or a "bot") that goes to every page or representative pages on every Web site that wants to be searchable and reads it, using hypertext links on each page to discover and read a site's other pages
  • A program that creates a huge index (sometimes called a "catalogue") from the pages that have been read
  • A program that receives your search request, compares it to the entries in the index, and returns results to you
An alternative to using a search engine is to explore a structured directory of topics. Yahoo, which also lets you use its search engine, is the most widely-used directory on the Web. A number of Web portal sites offer both the search engine and directory approaches to finding information.

Different search engine approaches
  • Major search engines index the content of a large portion of the web and provide results that can run for pages - and consequently overwhelm the user.
  • Specialized content search engines are selective about what part of the web is crawled and indexed. They provide a shorter but more focused list of results.
  • Ask Jeeves provides a general search of the web but allows you to enter a search request in natural language, such as "What's the weather in Seattle today?"
  • Special tools and some major websites such as Yahoo let you use a number of search engines at the same time and compile results for you in a single list.

How Search Engines Work

The term "search engine" is often used generically to describe both crawler-based search engines and human-powered directories. These two types of search engines gather their listings in radically different ways.

Crawler-Based Search Engines

Crawler-based search engines, such as Google, create their listings automatically. They "crawl" or "spider" the web, then people search through what they have found.
If you change your web pages, crawler-based search engines eventually find these changes, and that can affect how you are listed. Page titles, body copy and other elements all play a role.

Human-Powered Directories

A human-powered directory, such as the Open Directory, depends on humans for its listings. You submit a short description to the directory for your entire site, or editors write one for sites they review. A search looks for matches only in the descriptions submitted.
Changing your web pages has no effect on your listing. Things that are useful for improving a listing with a search engine have nothing to do with improving a listing in a directory. The only exception is that a good site, with good content, might be more likely to get reviewed for free than a poor site.

"Hybrid Search Engines" Or Mixed Results

In the web's early days, it used to be that a search engine either presented crawler-based results or human-powered listings. Today, it is extremely common for both types of results to be presented. Usually, a hybrid search engine will favor one type of listings over another. For example, MSN Search is more likely to present human-powered listings from LookSmart. However, it does also present crawler-based results (as provided by Inktomi), especially for more obscure queries.

The Parts Of A Crawler-Based Search Engine

Crawler-based search engines have three major elements. First is the spider, also called the crawler. The spider visits a web page, reads it, and then follows links to other pages within the site. This is what it means when someone refers to a site being "spidered" or "crawled." The spider returns to the site on a regular basis, such as every month or two, to look for changes.
Everything the spider finds goes into the second part of the search engine, the index. The index, sometimes called the catalog, is like a giant book containing a copy of every web page that the spider finds. If a web page changes, then this book is updated with new information.
Sometimes it can take a while for new pages or changes that the spider finds to be added to the index. Thus, a web page may have been "spidered" but not yet "indexed." Until it is indexed -- added to the index -- it is not available to those searching with the search engine.
Search engine software is the third part of a search engine. This is the program that sifts through the millions of pages recorded in the index to find matches to a search and rank them in order of what it believes is most relevant. You can learn more about how search engine software ranks web pages on the aptly-named How Search Engines Rank Web Pages page.
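
To make the index idea concrete, here is a minimal sketch in C of an inverted index that maps keywords to the pages on which they occur. The structure, size limits and function names are hypothetical and only illustrate the principle; real search engines use very large, disk-based index structures.

#include <stdio.h>
#include <string.h>

/* A tiny, hypothetical inverted index: each keyword maps to the list of
   page numbers on which it occurs. */
#define MAX_WORDS 100
#define MAX_PAGES 10

struct entry {
  char word[32];
  int  pages[MAX_PAGES];
  int  npages;
};

static struct entry index_table[MAX_WORDS];
static int nwords = 0;

/* The "spider" would call this for every word it reads on a page. */
static void add_word(const char *word, int page)
{
  int i;
  for (i = 0; i < nwords; i++) {
    if (strcmp(index_table[i].word, word) == 0) {
      if (index_table[i].npages < MAX_PAGES)
        index_table[i].pages[index_table[i].npages++] = page;
      return;
    }
  }
  if (nwords < MAX_WORDS) {
    strncpy(index_table[nwords].word, word, sizeof(index_table[nwords].word) - 1);
    index_table[nwords].pages[0] = page;
    index_table[nwords].npages = 1;
    nwords++;
  }
}

/* The query program looks the word up in the index and lists the pages. */
static void search(const char *word)
{
  int i, j;
  for (i = 0; i < nwords; i++) {
    if (strcmp(index_table[i].word, word) == 0) {
      printf("'%s' found on pages:", word);
      for (j = 0; j < index_table[i].npages; j++)
        printf(" %d", index_table[i].pages[j]);
      printf("\n");
      return;
    }
  }
  printf("'%s' not found\n", word);
}

int main(void)
{
  add_word("multimedia", 1);
  add_word("search", 2);
  add_word("multimedia", 3);
  search("multimedia");      /* prints: 'multimedia' found on pages: 1 3 */
  return 0;
}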

Search Engine Placement Tips

· Pick Your Target Keywords
· Position Your Keywords
Make sure your target keywords appear in the crucial locations on your web pages. The page's HTML title tag is most important. Failure to put target keywords in the title tag is the main reason why perfectly relevant web pages may be poorly ranked. More about the title tag can be found on the How To Use HTML Meta Tags page.
· Create Relevant Content
Changing your page titles is not necessarily going to help your page do well for your target keywords if the page has nothing to do with the topic. Your keywords need to be reflected in the page content.
· Avoid Search Engine Stumbling Blocks
Some search engines see the web the way someone using a very old browser might. They may not read image maps. They may not read frames. You need to anticipate these problems, or a search engine may not index some or all of your web pages.
· Frames Can Kill
Some of the major search engines cannot follow frame links. Make sure there is an alternative method for them to enter and index your site, either through meta tags or smart design.

COMMUNICATION SOFTWARE & INTERNET TOOLS

Communication software is used to provide remote access to systems and to exchange files and messages in text, audio and/or video formats between different computers or user IDs. This includes terminal emulators, file transfer programs, chat and instant messaging programs, as well as similar functionality integrated within MUDs (multi-user dungeons).
Email Software - all types of email software
Advanced Email Verifier - Keep your mailing lists and address books clean with this powerful tool that eliminates non-working email addresses reliably.
Advanced Email Parser - software that allows you to create an automated incoming-email processing system.
G-Lock Easy Mail – bulk email software that can be used for email marketing. Messages are sent directly from your PC to the recipient's mail server (without using any ISP's SMTP server).
PageGate - messaging server software sends email messages or email notification to pagers and cell phones.
SpyMail - messaging server software sends email messages or email notification to pagers and cell phones.
Poco Mail - a specific focus: to let you exploit the full potential of e-mail, whether you get one or one hundred messages a day.
I am a Big Brother - monitor your children's instant messages, emails, web surfing and much more while undetected by the user.
Text Aloud - converts any text into spoken words. Instead of spending valuable time reading your email, you can have it read to you!
KeyLog Pro - Secretly record chats, emails, instant messages, keystrokes, Hotmail, AOL emails, Yahoo! chat and AOL chat
Wireless Software - all types of wireless related software.
PageGate - network paging gateway that allows text messages to be sent to cell phones, pagers and PIMs from any combination of six different interfaces (e-mail, web, GUI, TAP-in, Serial and commandline).
NotePager Pro - send text or SMS messages to pagers, mobile phones, and PIMs using an easy to use desktop application.
NotePager Net - full-feature network paging software that allows all users on a network to share a common modem, phone line and database to facilitate the sending of text messages to pagers, cellular phones or other messaging devices.
Broadcast Software - including MP3s, audio recording and call recording software
Telephony Software - IVR and telephony related applications
Call Corder - records telephone conversations directly to your hard disk with a single push of a button, optionally playing a legal disclaimer before recording. It stores calls as standard Windows sound files, adding a memo to allow fast and easy call navigation.
Modem Spy - handy utility for recording phone conversations. It can play back recorded messages via modem or via sound card. There is an option to record all incoming and outgoing calls. You will need a voice modem in order to run the program.
Extra Dialer Pro - Extra Dialer is a program that uses advanced technology for broadcasting voice messages. It provides a cost-effective and efficient solution for performing important yet repetitive and time-consuming tasks.
Advanced Call Center - easy-to-use answering machine software for your voice modem. All necessary functions are supported: Caller ID lets you see and hear who's calling via screen pop-ups, distinctive rings and caller's name announcement with speech synthesis.
Active Phone Server - an application designed to manage your incoming and outgoing phone calls. All essential features are supported: an advanced answering machine, a caller ID function which displays a caller's information when a call is received, and the ability to customize voices and melodies for each phone number.
PhoneWorks - an easy-to-use and powerful telephone, voice mail answering system, and fax messaging solution for your PC. PhoneWorks solves your messaging problems by dramatically simplifying how you read, listen to, and manage your daily information.
Internet Communication Software - all types of internet communication software
Instant Communication Software - including web based instant messaging, and peer to peer messaging.
Voicemail Software - voicemail software solutions and recording
Messaging Software - all types of messaging and related programs
Mass Communication - tools and utilities for mass communication
Paging Software - software for paging
SMS Communication Software - SMS messaging related software

INTERNET TOOLS


  • Browser/Server security information - for reading all header and JavaScript information available from a web page or your browser
  • Web Content analysis - in-depth analysis of a web page's content. Highly recommended for web developers!
  • Web Traffic analysis - extensive information about a site's popularity on the Internet (historical traffic, ranking, page views and more). Highly recommended for web marketing!
  • Geographical IP Lookup - finds a geographical location based on an IP address and displays an interactive map with address information.

Tools

Applications, clients and servers enable storage of information on servers, for access by users who have client software, for a range of purposes more complex than simple utility tasks
  • Serverwatch: Information about Internet servers; includes news, downloads, and reviews of Web, mail, news, ftp, and other servers; part of internet.com
  • FTP RFC: File Transfer Protocol (FTP); describes FTP terms and operation; Request for Comments 959, J. Postel and J. Reynolds, October 1985
  • FTPplanet: Directory of File Transfer Protocol information and software; includes instruction guides, help, and technical information about using FTP
  • FTP Clients: File Transfer Protocol software; description and reviews of downloadable software for transferring files over the Internet; from internet.com portal
  • Telnet RFC: The Remote User Telnet Service; describes the telnet protocol and its operation; Request for Comments 818, J. Postel, November 1982
  • Telnet Clients: Telnet software; description and reviews of downloadable software for allowing you to login to a remote computer host and use it as if on a terminal; from internet.com portal
  • WWW: World Wide Web Consortium; develops interoperable technologies (specifications, guidelines, software, and tools) for the World Wide Web
  • WWW Browsers: Web Browsers; a list of downloads for Web browser software with reviews; from internet.com portal
  • Web reference: Web technician's reference; includes discussion of technical issues involved in Web implementation
  • Web Development: Web content development; discussion of a methodology for Web content development involving issues of audience and purpose
  • HTML Station: HTML reference; includes demonstrations, tutorials, codes, specification summaries, techniques/technologies descriptions, and supporting information about hypertext markup language (HTML) and related technologies

Wednesday, March 10, 2010

Multimedia

Multimedia is media and content that uses a combination of different content forms. The term can be used as a noun (a medium with multiple content forms) or as an adjective describing a medium as having multiple content forms. The term is used in contrast to media which only use traditional forms of printed or hand-produced material. Multimedia includes a combination of text, audio, still images, animation, video, and interactivity content forms.
Definitions:-
“As the name implies, multimedia is the integration of multiple forms of media. This includes text, graphics, audio, video, etc”.
For example, a presentation involving audio and video clips would be considered a "multimedia presentation." Educational software that involves animations, sound, and text is called "multimedia software." CDs and DVDs are often considered to be "multimedia formats" since they can store a lot of data and most forms of multimedia require a lot of disk space.
“Information in more than one form. It includes the use of text, audio, graphics, animation and full-motion video. Multimedia programs are typically games, encyclopedias and training courses on CD-ROM or DVD. However, any application with sound and/or video can be called a multimedia program.”
History of the term
The term "multimedia" was coined by Bob Goldstein (later 'Bobb Goldsteinn') to promote the July 1966 opening of his "LightWorks at L'Oursin" show at Southampton, Long Island. On August 10, 1966, Richard Albarino of Variety borrowed the terminology, reporting: “Brainchild of songscribe-comic Bob (‘Washington Square’) Goldstein, the ‘Lightworks’ is the latest multi-media music-cum-visuals to debut as discotheque fare.”. Two years later, in 1968, the term “multimedia” was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer—one of Goldstein’s producers at L’Oursin.
Multimedia Application
Multimedia can be used for entertainment, corporate presentations, education, training, simulations, digital publications, museum exhibits and so much more. With the advent of multimedia authoring applications like Flash, Shockwave and Director, amongst a host of other equally enchanting applications, your multimedia end product is limited only by your imagination.
Multimedia Education
Definition: Multimedia combines five basic types of media into the learning environment: text, video, sound, graphics and animation, thus providing a powerful new tool for education.
Classroom Architecture and Resources
Contents:
  • The Trend Towards Online Multimedia Education and Its Advantages Over Traditional Methods
  • Framework of an Online Multimedia Education System
  • Innovative Item Types for Learning and Testing
  • Educational Games
  • Item Shells for Automatic Generation of Multiple Items
  • Testing Intelligence and Problem Solving Skills
  • Student Modeling
  • Adaptive Testing and Item Response Theory
  • Educational Item Authoring
  • Multimedia Education on Mobile Devices
  • Human Computer Interaction, Affective Education and User Evaluation
Multimedia Design Training
Multimedia presentations are a great way to introduce new concepts or explain a new technology. In companies, this reduces design and training time. Individuals find it easy to understand and use.
Multimedia Entertainment
The field of entertainment uses multimedia extensively. One of the earliest applications of multimedia was for games. Multimedia made possible innovative and interactive games that greatly enhanced the learning experience. Games could come alive with sounds and animated graphics.
Multimedia Business
Even basic office applications like a word processing package or a spreadsheet tool become powerful tools with the aid of multimedia. Pictures, animation and sound can be added to these applications, emphasizing important points in the documents.
Miscellaneous
Virtual reality is a truly absorbing multimedia application. It is an artificial environment created with computer hardware and software. It is presented to the user in such a way that it appears and feels real. In virtual reality, the computer controls three of the five senses. Virtual reality systems require extremely expensive hardware and software and are confined mostly to research laboratories.
Another multimedia application is videoconferencing. Videoconferencing is conducting a conference between two or more participants at different sites by using computer networks to transmit audio and video data.

Multimedia Systems and Multimedia Programming

A complex multimedia production, whether a video game, a multimedia encyclopaedia or a “location-based entertainment environment,” often requires the concerted effort of large teams of people. Like film and video production, multimedia production calls upon the talents of artists, actors, musicians, script writers, editors and directors. These people, responsible for “content design” to use current terminology, create raw material and prepare it for presentation and interaction. In doing so they rely on multimedia authoring environments to edit and compose digital media.
The authoring environments used for multimedia production are examples of multimedia systems . Some other examples are:
multimedia database systems — used to store and retrieve, or better, to “play” and “record” digital media;
hypermedia systems — used to navigate through interconnected multimedia material;
video-on-demand systems — used to deliver interactive video services over wide-area networks.
The design and implementation of the above systems, and other systems dealing with digital media, forms the domain of multimedia programming.
Multimedia programming is based on the manipulation of media artefacts through software. One of the most important consequences arising from the digitization of media is that artefacts are released from the confines of studios and museums and can be brought into the realm of software. For instance, the ordinary spreadsheet or word processor no longer needs to content itself with simple text and graphics, but can embellish its appearance with high-resolution colour images and video sequences. (Although the example is intended somewhat facetiously, we should keep in mind that digital media offer many opportunities for abuse. Just as the inclusion of multiple fonts in document processing systems led to many “formatting excesses,” so the ready availability of digital media can lead to their gratuitous use.)
With the appearance of media artefacts in software applications, programmers are faced with new issues and new problems. Although recent work in data encoding standards, operating system design and network design has identified a number of possible services for supporting multimedia applications, the application programmer must still be aware of the capabilities and limitations of these services. Issues influencing application design include:
Media composition — digital media can be easily combined and merged. Among the composition mechanisms found in practice are: spatial composition (the document metaphor) which deals with the spatial layout of media elements; temporal composition (the movie metaphor) considers the relative positioning of media elements along a temporal dimension; procedural composition (the script metaphor) describes actions to be performed on media elements and how media elements react to events; and semantic composition (the web metaphor) establishes links between related media elements.
Media synchronisation — media processing and presentation activities often have synchronisation constraints [10][13]. A familiar example is the simultaneous playback of audio and video material where the audio must be “lip synched” with the video. In general, synchronisation cannot be solved solely by the network or operating system and, at the very least, application developers must be aware of the synchronisation requirements of their applications and be capable of specifying these requirements to the operating system and network. A rough timing sketch is given at the end of this list.
User-interfaces — multimedia enriches the user-interface but complicates implementation since a greater number of design choices are available. For example, questions of “look-and-feel” and interface aesthetics must now take into account audio, video and other digital media, instead of just text and graphics. Multimodal interaction [2], where several “channels” can be used for information presentation, is another challenge in the design of multimedia user-interfaces.
Compression schemes — many techniques are currently used, some standard and some proprietary, for the compression of digital audio and video data streams. Application developers need to be aware of the performance and quality trade-offs among the numerous compression schemes.
Database services — application programming interfaces (APIs) for multimedia databases are likely to differ considerably from the APIs of both traditional databases and the more recent object-oriented databases. For example, it has been argued that multimedia databases require asynchronous, multithreaded APIs [6] as opposed to the more common synchronous and single-threaded APIs (where the application sends the database a request and then waits for the reply). The introduction of concurrency and asynchrony has a major impact on application architecture.
Operating system and network services — recent work on operating system support for multimedia — see Tokuda [14] for an overview — proposes a number of new services such as real-time scheduling and stream operations for time-based media. Similarly, research on “multimedia networks” (e.g. [4], [12]) introduces new services such as multicasting and “quality of service” (QoS) guarantees. Developers must consider these new services and their impact on application architecture.
Platform heterogeneity — cross-platform development, and the ability to easily port an application from one platform to another, are important for the commercial success of multimedia applications. It is also desirable that multimedia applications adapt to performance differences on a given platform (such as different processor speeds, device access times and display capabilities).
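
As a rough illustration of the kind of bookkeeping synchronisation involves (not any particular system's API; the names, units and threshold are assumptions), the following C sketch compares the timestamp of the next video frame with the audio clock and decides whether the frame should be shown now, held back or dropped.

#include <stdio.h>

/* Hypothetical lip-sync check: compare the presentation timestamp of the
   next video frame with the audio clock and decide what to do with it.
   The tolerance and the millisecond units are illustrative only. */
#define SYNC_WINDOW_MS 40   /* roughly one frame period at 25 pictures/second */

enum action { SHOW_FRAME, DROP_FRAME, WAIT_FOR_AUDIO };

static enum action sync_decision(long video_pts_ms, long audio_clock_ms)
{
  long drift = video_pts_ms - audio_clock_ms;

  if (drift < -SYNC_WINDOW_MS)
    return DROP_FRAME;      /* video is late: skip the frame to catch up */
  if (drift > SYNC_WINDOW_MS)
    return WAIT_FOR_AUDIO;  /* video is early: hold the frame back       */
  return SHOW_FRAME;        /* within tolerance: present it now          */
}

int main(void)
{
  /* e.g. the next frame is stamped 1000 ms but the audio clock reads 1100 ms */
  enum action a = sync_decision(1000, 1100);
  printf("decision: %d\n", a);   /* prints 1, i.e. DROP_FRAME */
  return 0;
}
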
In summary, a rich set of data representation, user interface, application architecture, performance and portability issues face the developers of multimedia systems. What we seek from environments for multimedia programming are high-level software abstractions that help developers explore this wide design space.
Multimedia Frameworks
We now look at a particular multimedia framework — one that provides explicit support for component-oriented software development. This framework is described more fully elsewhere [5]. In essence it consists of four main class hierarchies: media classes, transform classes, format classes and component classes, discussed below.
Media classes correspond to audio, video and the other media types. Instances of these classes are particular media values — what were called media artefacts earlier in the chapter.
Transform classes represent media operations in a flexible and extensible manner. For example, many image editing programs provide a large number of filter operations with which to transform images. These operations could be represented by methods of an image class; however, this makes the image class overly complicated and adding new filter operations would require modifying this class. These problems are avoided by using separate transform classes to represent filter operations.
Format classes encapsulate information about external representations of media values. Format classes can be defined for both file formats (such as GIF and TIFF, two image file formats) and for “stream” formats (for instance, CCIR 601 4:2:2, a stream format for uncompressed digital video).
Component classes represent hardware and software resources that produce, consume and transform media streams. For instance, a CD-DA player is a component that produces a digital audio stream (specifically, stereo 16 bit PCM samples at 44.1 kHz).
Components are central to the framework for two reasons. First, the framework is adapted to a particular platform by implementing component classes that encapsulate the media processing services found on the platform. Second, applications are constructed by instantiating and connecting components. The remainder of this section looks at components in more detail; a rough sketch in C follows the class listing below.
Media
  Text
  Image
    Binary Image
    Gray Scale Image
    Colour Image
  Graphic
    2dGraphic
    3dGraphic
  Temporal Media
    Audio
      Raw Audio
      Compressed Audio
    Video
      Raw Video
      Compressed Video
    Animation
      Event Based Animation
      Scene Based Animation
    Music
      Event Based Music
      Score Based Music
Transform
  Image Transform
  Audio Transform
  Video Transform
Format
  Text Format
  Image Format
  Graphic Format
  Temporal Media Format
    Audio Format
    Video Format
    Animation Format
    Music Format
Component
  Producer
  Consumer
  Transformer
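
The framework itself is object-oriented; as a very rough sketch of how the four hierarchies relate, the following C fragment models media values, transforms, formats and components with structs and function pointers. All of the names and types here are hypothetical illustrations, not the framework's actual classes.

#include <stdio.h>
#include <stddef.h>

/* Media classes: a media value tagged with its type. */
typedef enum { MEDIA_AUDIO, MEDIA_VIDEO, MEDIA_IMAGE } MediaType;

typedef struct {
  MediaType type;
  void     *data;        /* samples, frames, pixels ...    */
  size_t    size;
} Media;

/* Transform classes: an operation applied to a media value. */
typedef Media *(*Transform)(const Media *in);

/* Format classes: a description of an external representation. */
typedef struct {
  const char *name;      /* e.g. "GIF" or "CCIR 601 4:2:2" */
  int         is_stream; /* file format or stream format   */
} Format;

/* Component classes: producers, consumers and transformers of media streams. */
typedef struct Component {
  const char *name;
  Media *(*produce)(struct Component *self);
  void   (*consume)(struct Component *self, Media *m);
} Component;

/* Applications are constructed by connecting components:
   pull a value from the producer and push it to the consumer. */
static void connect(Component *src, Component *dst)
{
  Media *m = src->produce(src);
  if (m != NULL)
    dst->consume(dst, m);
}

static Media *cd_produce(struct Component *self)
{
  static Media m = { MEDIA_AUDIO, NULL, 0 };
  printf("%s: producing an audio value\n", self->name);
  return &m;
}

static void speaker_consume(struct Component *self, Media *m)
{
  printf("%s: consuming a media value of type %d\n", self->name, m->type);
}

int main(void)
{
  Component cd      = { "CD-DA player", cd_produce, NULL };
  Component speaker = { "speaker",      NULL, speaker_consume };
  connect(&cd, &speaker);
  return 0;
}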

Multimedia Authoring

Definition: Multimedia authoring involves collating, structuring and presenting information in the form of a digital multimedia, which can incorporate text, audio, and still and moving images.
The driving force behind all authoring is the human need to communicate. Verbal, pictorial, sign and written languages have provided the means to communicate meaning since time immemorial. Today we can employ multimedia systems to combine text, audio, still and moving images to communicate. Computer-based digital multimedia systems not only provide the means to combine these multiple media elements seamlessly, but also offer multiple modalities for interacting with these elements. The cross-product of these multiple elements and modalities gives rise to a very large number of ways in which these can be combined.
Who is the Author?
A movie is created by a series of transformations. The inspiration and ideas for a story come from life. The Writer uses life experiences to create a story plot; at this stage the Writer is a user, while Life is the author. The Writer then writes a film script, or screenplay, which is used by the Director. Then the Director becomes the author of the raw footage based on the script. Often people consider the Director as the ultimate author of a movie; if this was true, then we should all be happy watching the raw footage. It is the Editor who puts this raw footage together to make the complete movie that can be watched as a meaningful presentation. Therefore, we can say that the Editor is the final author of the movie. However, with a videocassette or a DVD, the Borrower can use the remote control and change the order in which the various scenes are viewed. Now the Borrower is the author, and the other home viewers (deprived of the remote control) are the Users.
Interactive multimedia systems provide the users with the ability to change the presented content, making them the final Authors of the presentation. However, with the ability to easily manipulate multimedia content, new collaborative authoring paradigms are constantly being invented, based on the ideas of remixing and Open Source software.
Authoring Dimensions
Multimedia authoring involves three dimensions, namely the temporal, spatial and digital dimensions. These are not entirely orthogonal; changes in one dimension can therefore affect the composition in the others.
The temporal dimension relates to the composition of the multimedia presentation in time. The main aspect of the temporal composition is the narrative, which is akin to the plot of a story. In traditional media – such as a novel or a movie – the narrative is fixed, and the user is expected to traverse the narrative as per the predetermined plot. In interactive multimedia systems, the user is given the ability to vary the order in which the content is presented; in other words, the user can change the narrative. The Movement Oriented Design (MOD) paradigm provides a model for the creation of temporal composition of multimedia systems.
The spatial dimension deals with the placement and linking of the various multimedia elements on each ‘screen’. This is similar to the concept of mise en scène used by film theorists. In a time-varying presentation – such as a movie or an animation – the spatial composition changes continuously: most of the time the change is smooth, and at other times the change is abrupt, i.e. a change of scene. The spatial composition at any point in time must relate to the narrative, or the plot of the temporal composition, while fulfilling the aims and objectives of the system. The Multimedia Design and Planning Pyramid (MUDPY) model provides a framework for developing the content starting with a concept.
The digital dimension relates to the coding of multimedia content, its meta-data, and related issues. Temporal and spatial composition was part of pre-digital multimedia design as well, e.g. for films, slide shows, and even the very early multimedia projection systems called the Magic Lantern. The digital computer era, particularly over the last two decades, has provided much greater freedom in coding, manipulating, and composing digitized multimedia content. This freedom brings with it the responsibility of providing meaningful content that does not indulge in fancy ‘bells and whistles’ (e.g. bouncing letters or dancing eyeballs) just for their own sake. The author must make sure that any digital artifact relates to the aims and objectives of the presentation.

Authoring Processes

Authors aim to convey some ideas or new meanings to their audience. All authoring systems require a process that the author needs to follow, to effectively convey their ideas to the consumers of the content. Novels, movies, plays are all ‘Cultural Interfaces’ that try to tell a story. Models of processes for creating good stories have been articulated for thousands of years. Nonetheless, some scholars stand out, such as Aristotle, who over 2300 years ago wrote Poetics, a seminal work on authoring. Robert McKee details story authoring processes as applied to screenplay writing. Michael Tierno shows how Aristotle’s ideas for writing tragedies can be applied to creating good screenplays. Dramatica is a new theory of authoring, based on the problem solving metaphor.
Processes involved in creating a meaningful digital multimedia presentation have evolved from the processes used in other media authoring systems; and some of these are used as metaphors for underpinning the process of creating multimedia. For example, PowerPoint uses the slideshow metaphor, as it relates to lecture presentations based on the (optical) slide projector. Multimedia authoring is one of the most complex authoring processes, and to some extent not as well grounded as those for the more traditional media. The following sections present two authoring models developed for supporting the process of authoring multimedia systems.
Conclusion
Authoring multimedia is much more complex than authoring traditional media. Collaboration between various parties is necessary for authoring any significant multimedia system. There are three multimedia-authoring dimensions: temporal, spatial and digital. These dimensions interact with each other in complex ways. The Movement Oriented Design (MOD) methodology uses story-telling concepts to develop the narrative of a multimedia system in the temporal dimension. Multimedia Design and Planning Pyramid (MUDPY) is a model that supports systematic planning, design and production of multimedia projects. Multimedia project planning and design components include: Concept statement, Goals, Requirements, Target Audience, Treatment, Specifications, Storyboard, Navigation, Task Modeling, Content Gathering, Integration, and Testing. The MUDPY model exposes the relationship between these multimedia authoring aspects, suggests the order in which these should be tackled, and thus supports cooperation between the members of a multimedia authoring team.

Authoring Tools

Selecting an authoring system is a complex procedure. Identifying a set of standards that a multimedia authoring package should meet therefore helps to simplify the whole task.
A substantial effort by Preclik (2002) produced the following variables:
(1) Variety of designed applications: Usually, less sophisticated authoring tools offer only the ability to design applications that are identical to one another. Of course, this is a result of efforts to minimize package complexity, which leads to a corresponding drop in capability.
(2) User interface: Normally, a good interface presents itself in two modes (at least): The “beginner mode,” with only the basic capabilities, and the “expert mode,” which offers all available features.
(3) Test questions: Rather than offering just plain multiple-choice questions, complex systems distinguish themselves by offering much more: hotspot questions, drag-and-drop questions, short-answer questions, true/false questions, etc.
List of some examined authoring tools
#  | Program                      | Company               | Price                       | OS
1  | Authorware                   | Macromedia            | $2,999                      | Windows/Mac
2  | CBTMaster (Lessons)          | SPI                   | $49                         | Windows
3  | DazzlerMax Deluxe            | MaxIT Co.             | $1,995                      | Windows
4  | Director                     | Macromedia            | $1,199                      | Windows/Mac
5  | EasyProf                     | EasyProf              | €1,105                      | Windows
6  | eZediaMX                     | eZedia                | $169                        | Windows/Mac
7  | Flash                        | Macromedia            | $499                        | Windows/Mac
8  | Flying Popcorn               | Parasys               | $149                        | Windows
9  | Formula Graphics Multimedia  | FGX                   | $49.95                      | Windows
10 | HyperMethod                  | HyperMethod           | $190 (standard)-$390 (pro)  | Windows
11 | HyperStudio                  | Knowledge Adventure   | $69.95                      | Windows/Mac
12 | InfoChannel Designer         | Scala                 | $359                        | Windows
13 | iShell 3                     | Tribeworks            | $495                        | Windows/Mac
14 | Liquid Media                 | SkunkLabs             | $140-$200 (academic)        | Windows
15 | Magenta II                   | Magenta               | $149                        | Windows
16 | MaxMedia                     | ML Software           | $50                         | Windows
17 | Media Make&Go                | Sanarif               | €399                        | Windows
18 | Media Mixer                  | CD-Rom Studio         | $75                         | Windows
19 | MediaPro                     | MediaPro              | $99                         | Windows
20 | Mediator 7 Pro               | Matchware             | $399                        | Windows
21 | MetaCard                     | MetaCard Co.          | $995                        | Windows/Mac/UNIX
22 | Motion Studio 3              | Wisdom Software       | $39.95                      | Windows
23 | MovieWorks Deluxe            | Interactive Solutions | $99.95                      | Windows/Mac
24 | MP Express                   | Bytes of Learning     | $49.95                      | Windows/Mac
25 | Multimedia Builder           | Media Chance          | $60                         | Windows
26 | Multimedia Fusion            | ClickTeam             | $99                         | Windows
27 | Multimedia Scrapbook         | Alchemedia, Inc.      | $89                         | Windows
28 | MultimediaSuite              |                       | $649                        | Windows
29 | Navarasa Multimedia 4        | Navarasa Multimedia   | $29.99                      | Windows
30 | NeoBook                      | NeoSoft Co.           | $199.95                     | Windows
31 | ODS Players                  | Optical Data Systems  | $229                        | Windows
32 | Opus Pro                     | Digital Workshop      | $249.95                     | Windows

Hypertext

Hypertext is a way of organizing material that attempts to overcome the inherent limitations of traditional text and in particular its linearity.
“Hypertext is the presentation of information as a linked network of nodes which readers are free to navigate in a non-linear fashion.”

Hypertext Terms

This is a glossary of terms used within the WWW. In most cases, their use corresponds to conventional use in hypertext circles.
Anchor
An area to fix a graphical object so that its position relative to some other object remains the same during repagination. Frequently, for example, you may want to anchor a picture next to a piece of text so that they always appear together.
Annotation
A comment attached to a particular section of a document. Many computer applications enable you to enter annotations on text documents, spreadsheets, presentations, and other objects. This is a particularly effective way to use computers in a workgroup environment to edit and review work. The creator of a document sends it to reviewers who then mark it up electronically with annotations and return it. The document's creator then reads the annotations and adjusts the document appropriately.
Authoring
A term for the process of writing a document. “Authoring” seems to have come into use in order to emphasise that document production involved more than just writing.
Back Link
A link in one direction implied from the existence of an explicit link in the reverse direction.
Browser
An application which allows a person to read hypertext. The browser gives some means of viewing the contents of nodes and of navigating from one node to another.
Button
An element that performs a specific task when pressed by the user; it acts as a trigger for an action.
Card
An alternative term for a node in a system (e.g. HyperCard, NoteCards) in which the node size is limited to a single page of a fixed size.
Client
A program which sends requests for services to a server.
Cyberspace
This is the “electronic” world as perceived on a computer screen; the term is often used in opposition to the “real” world.
Database
A collection of data organized in a well-managed manner, through which the user can find information.
Daemon
A program which runs independently of, for example, the browser. Under UNIX, “daemon” is used for “server”.
Document
A document (noun) is a bounded physical representation of a body of information designed with the capacity (and usually intent) to communicate.
Domain
A group of computers and devices on a network that are administered as a unit with common rules and procedures. Within the Internet, domains are defined by the IP address. All devices sharing a common part of the IP address are said to be in the same domain.
External
A link to a node in a different database.
Host
A computer system that is accessed by a user working at a remote location. Typically, the term is used when there are two computer systems connected by modems and telephone lines. The system that contains the data is called the host, while the computer at which the user sits is called the remote terminal.
Hypermedia
An extension to hypertext that supports linking graphics, sound, and video elements in addition to text elements. The World Wide Web is a partial hypermedia system since it supports graphical hyperlinks and links to sound and video files. New hypermedia systems under development will allow objects in computer videos to be hyperlinked.
Index
A list of keys (or keywords), each of which identifies a unique record. Indices make it faster to find specific records and to sort records by the index field -- that is, the field used to identify each record.
Internal
A link to a node in the same database.
Link
In hypertext systems, such as the World Wide Web, a link is a reference to another document. Such links are sometimes called hot links because they take you to another document when you click on them.
Navigation
A type of text-based Web site navigation that breaks the site into links of categories and sub-categories, allowing major categories of information to be linked in sequential order. Breadcrumb navigation is displayed to the user, so they can easily see exactly where a Web page is located within the Web site. While many types of Web sites use breadcrumb navigation, it is becoming increasingly common for electronic commerce Web sites to display categories of products in this way.
Node
A unit of information

Graphics

Computer displays are made up from grids of small rectangular cells called pixels. The picture is built up from these cells. The smaller the cells and the closer together they are, the better the quality of the image, but the bigger the file needed to store the data. If the number of pixels is kept constant, the size of each pixel grows and the image becomes grainy (pixelated) when magnified, as the resolution of the eye enables it to pick out individual pixels.
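
For example (a small illustrative calculation, not tied to any particular file format), the uncompressed storage needed for a bitmap grows directly with the number of pixels and the number of bits used per pixel:

#include <stdio.h>

int main(void)
{
  int width = 1024, height = 768;   /* pixels                               */
  int bits_per_pixel = 24;          /* true colour: 8 bits each for R, G, B */

  long bytes = (long)width * height * bits_per_pixel / 8;
  printf("%dx%d at %d bits/pixel needs %ld bytes (about %.1f MB)\n",
         width, height, bits_per_pixel, bytes, bytes / (1024.0 * 1024.0));
  return 0;
}

Doubling the resolution in both directions quadruples the storage required, which is why higher-quality raster images need noticeably larger files.
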
Vector graphics is the use of geometrical primitives such as points, lines, curves, and shapes or polygon(s), which are all based on mathematical equations, to represent images in computer graphics.
Vector graphics files store the lines, shapes and colors that make up an image as mathematical formulae. A vector graphics program uses these mathematical formulae to construct the screen image, building the best quality image possible, given the screen resolution. The mathematical formulae determine where the dots that make up the image should be placed for the best results when displaying the image. Since these formulae can produce an image scalable to any size and detail, the quality of the image is only determined by the resolution of the display, and the file size of vector data generating the image stays the same. Printing the image to paper will usually give a sharper, higher resolution output than printing it to the screen but can use exactly the same vector data file.
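
By contrast, a vector primitive can be thought of as a handful of stored parameters that are re-evaluated at whatever resolution is needed. The sketch below (the structure and names are hypothetical) scales the same circle for a screen and for a higher-resolution printer; the stored data never changes and no detail is lost.

#include <stdio.h>

/* A hypothetical vector primitive: a circle stored as parameters, not pixels. */
struct circle { double cx, cy, r; };

/* Rasterise the same stored shape at a given output resolution. */
static void rasterise(struct circle c, int pixels_per_unit)
{
  printf("draw circle at (%.0f, %.0f), radius %.0f pixels\n",
         c.cx * pixels_per_unit, c.cy * pixels_per_unit, c.r * pixels_per_unit);
}

int main(void)
{
  struct circle c = { 2.0, 1.5, 1.0 };  /* coordinates in abstract units */
  rasterise(c, 100);   /* screen : draw circle at (200, 150), radius 100   */
  rasterise(c, 600);   /* printer: draw circle at (1200, 900), radius 600  */
  return 0;
}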
