Thursday, April 06, 2006
Fifth Generation of Input Device
Voice recognition, or speech recognition, is the computer's capability to distinguish spoken words.
Voice recognition programs do not actually understand speech; they only recognize a vocabulary of preprogrammed words. Voice recognition programs are either speaker dependent or speaker independent.
With speaker-dependent software, the computer makes a profile of the speaker's voice.
Speaker-independent software has a built-in set of word patterns, so no 'training' is necessary before it can recognize the speaker's voice.
In addition, voice recognition programs are either discrete or continuous.
Discrete speech means that the speaker has to speak slowly and separate each word with a short pause.
Continuous speech means that the speaker may speak in a flowing conversational tone.
Advantages of voice recognition include
1. No typing of data is necessary.
2. The system can be used remotely by telephone or by people whose hands are occupied or disabled.
3. Voice recognition systems are also ideal for blind or visually impaired users.
Disadvantages of voice recognition include
1. Error rate is still high at the moment.
2. Recognition of words is slow.
3. Words that sound the same (e.g., see and sea) cannot be distinguished.
4. The system is not suitable for use in noisy places.
5. The software must be trained to recognize specialist or technical words.
6. Many people find it difficult to speak in a 'writing' style.
Thursday, March 23, 2006
Thursday, March 09, 2006
The Three Kinds of Mouse
A later variation on the mouse, invented in the early 1970s by Bill English at Xerox PARC, replaced the external wheels with a single ball that could rotate in any direction. The ball's motion, in turn, was detected using perpendicular wheels housed inside the mouse's body. This variant of the mouse resembled an inverted trackball and was the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size QWERTY keyboard and grabbing the mouse as needed.
Whereas a mechanical mouse uses a ball and wheels to detect the movement of the mouse, an optical mouse uses a light-emitting diode and photodiodes to detect the mouse's movement relative to the underlying surface.
Early optical mice, such as those invented by Steve Kirsch of Mouse Systems Corporation, could be used only on a special metallic surface (a mouse pad) printed with a grid of fine blue and grey lines. As computing power grew cheaper, it became possible to embed more powerful special-purpose image processing chips in the mouse. This advance enabled the mouse to detect its relative motion on a wide variety of surfaces, in turn translating the movement of the mouse over the surface into the movement of the pointer, eliminating the need for a special mouse pad. This advance paved the way for widespread adoption of optical mice.
In 2004, Logitech, along with Agilent Technologies, introduced the laser mouse with its MX 1000 model. This mouse uses a small laser instead of an LED, which increases the detail of the image taken by the mouse. The companies claim that this leads to a 20x increase in sensitivity to the surface features used for navigation compared with conventional optical mice. Gamers have complained that the MX 1000 does not respond immediately to movement after it is picked up, moved, and then put down on the mouse pad. The delay is a power-saving feature: since the MX 1000 is a wireless mouse, its engineers designed it to save as much power as possible, so it blinks the laser when in standby mode (8 seconds after the last motion), which also extends the laser's life. Newer revisions of the mouse do not seem to suffer from this problem. (Almost all optical mice, laser or LED based, implement this power-saving feature, except those intended for use in gaming, where a millisecond of delay is significant.)
First, let's discuss the mouse. A mouse is a handheld pointing device for computers: a small object fitted with one or more buttons and shaped to sit naturally under the hand. The underside of the mouse houses a device that detects the mouse's motion relative to the flat surface on which it moves. The mouse's 2D motion is typically translated into the motion of a pointer on the display.
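That translation from relative 2D motion to an absolute pointer position can be sketched in a few lines. This is a minimal illustration; the screen size, starting position, and motion values are invented for the example:

```python
# A minimal sketch of how relative mouse motion becomes an absolute
# pointer position. Screen size and motions are illustrative values.

SCREEN_W, SCREEN_H = 1024, 768

def move_pointer(x, y, dx, dy):
    """Apply one relative (dx, dy) motion report, clamping to the screen."""
    x = max(0, min(SCREEN_W - 1, x + dx))
    y = max(0, min(SCREEN_H - 1, y + dy))
    return x, y

# The mouse sends a stream of small relative motions:
pos = (512, 384)
for delta in [(5, 0), (0, -3), (-2000, 0)]:  # last one runs off the left edge
    pos = move_pointer(*pos, *delta)

print(pos)  # (0, 381): the pointer stops at the left edge
```

The clamping step is why the pointer parks at a screen edge no matter how far you keep dragging the mouse in that direction.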
It is called a mouse primarily because the cord on early models resembled the rodent's tail, and also because the motion of the pointer on the screen can be mouse-like.
The mouse was invented by Douglas Engelbart of the Stanford Research Institute in 1963 after extensive usability testing. It was also called the bug, but eventually this was dropped in favor of mouse. It was one of several experimental pointing devices developed for Engelbart's oN-Line System, which was both a hardware and software system. The other devices were designed to exploit other body movements—for example, head-mounted devices attached to the chin or nose—but ultimately, the mouse won out because of its simplicity and convenience.
The first mouse was bulky and used two gear wheels perpendicular to each other: the rotation of each wheel was translated into motion along one axis in the plane. Engelbart received patent US3541541 on November 17, 1970 for an "X-Y Position Indicator For A Display System". At the time, Engelbart intended that users would hold the mouse continuously in one hand and type on a five-key chord keyset with the other.
The QWERTY Keyboard
The arrangement of characters on a QWERTY keyboard was designed in 1868 by Christopher Sholes, the inventor of the typewriter. According to popular myth, Sholes arranged the keys in their odd fashion to prevent jamming on mechanical typewriters by separating commonly used letter combinations. However, there is no evidence to support this assertion, except that the arrangement does, in fact, inhibit fast typing.
With the emergence of ball-head electric typewriters and computer keyboards, on which jamming is not an issue, new keyboards designed for speed typing have been invented. The best-known is called the Dvorak keyboard. Despite their more rational designs, these new keyboards have not received wide acceptance.
The name "QWERTY" for our typewriter keyboard comes from the first six letters in the top alphabet row (the one just below the numbers). It is also called the "Universal" keyboard for rather obvious reasons. It was the work of inventor C. L. Sholes, who put together the prototypes of the first commercial typewriter in a Milwaukee machine shop back in the 1860's.
The first typewriter had its letters on the end of rods called "typebars." The typebars hung in a circle. The roller which held the paper sat over this circle, and when a key was pressed, a typebar would swing up to hit the paper from underneath. If two typebars were near each other in the circle, they would tend to clash into each other when typed in succession. So, Sholes figured he had to take the most common letter pairs such as "TH" and make sure their typebars hung at safe distances.
With the Dvorak keyboard, a typist can type about 400 of the English language's most common words without ever leaving the home row. The comparable figure on QWERTY is 100. The home row letters on Dvorak do a total of 70% of the work; on QWERTY they do only 32%.

The Dvorak keyboard sounds very good. However, a keyboard needs to do more than just "sound" good, and unfortunately, Dvorak has failed to prove itself superior to QWERTY. It appears that many of the studies used to test the effectiveness of Dvorak were flawed. Many were conducted by the good professor himself, creating a conflict-of-interest question, since he had a financial interest in the venture. A U.S. General Services Administration study of 1953 appears to have been more objective. It found that it really didn't matter what keyboard you used: good typists type fast, bad typists don't.

It's not surprising, then, that Dvorak has failed to take hold. No one wants to take the time and trouble to learn a new keyboard, especially if it isn't convincingly superior to the old. A few computer programs and special-order daisy wheels are available to transform modern typewriters or word processors to the Dvorak keyboard, but the demand for these products is small. After all, expert typists can do nearly 100 words a minute with QWERTY, and word processors increase that speed significantly. The gains that Dvorak claims to offer aren't really needed.
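The home-row comparison is easy to reproduce in miniature. This sketch counts what fraction of a text's letters fall on each layout's home row; the sample sentence is arbitrary, and real comparisons use large corpora:

```python
# Count the fraction of a text's letters that sit on a layout's home row.
# The sample sentence is an arbitrary illustration, not a real corpus.

QWERTY_HOME = set("asdfghjkl")
DVORAK_HOME = set("aoeuidhtns")

def home_row_fraction(text, home_row):
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home_row for c in letters) / len(letters)

sample = "the quick brown fox jumps over the lazy dog"
print(f"QWERTY home row: {home_row_fraction(sample, QWERTY_HOME):.0%}")
print(f"Dvorak home row: {home_row_fraction(sample, DVORAK_HOME):.0%}")
```

Even on this tiny sample, the Dvorak home row handles a clearly larger share of the letters, because it holds all five vowels plus the most frequent consonants.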
The Dvorak Keyboard
It has been estimated that in an average eight-hour day, a typist's hands travel 16 miles on a QWERTY keyboard, but only 1 mile on a Dvorak keyboard.
In addition to the standard Dvorak keyboard, there are two additional Dvorak keyboards, a left-handed and right-handed keyboard. These keyboards are designed for people who have only one hand for typing.
The Dvorak keyboard, named for its inventor, Dr. August Dvorak, was designed with the goal of maximizing typing efficiency. For over a century, typists have been using the qwerty keyboard arrangement, a hack that was implemented to work around the mechanical limitations of early typewriters.
Contrary to popular opinion, the qwerty design was not actually invented to slow typists down. Rather, the layout was intended to place common two-letter combinations on opposite sides of the keyboard. On manual typewriters, each key is mechanically connected to a lever that has the reversed image of a letter on it. If a typist were to hit two keys on the same side of the keyboard in rapid succession, the second lever on its way up would hit the first on its way down, the keys would become stuck together, and the typist would have to stop typing and unstick the keys. The qwerty layout was a clever design that minimized this problem. However, now that most of us use computers (or electric typewriters that don't use levers), the problem of keys jamming is no longer a consideration. Also, computers now enable us to switch layouts while continuing to use the same equipment.
Most people learn to type on a qwerty keyboard. New typists learn the qwerty arrangement because that's most likely what they'll encounter on the existing equipment they'll be using; new equipment is standardized to the qwerty arrangement because that's what the vast majority of us know. Most people are reluctant to switch because they're afraid of how long it will take them to learn the new arrangement, and of the additional effort of having to switch layouts on all of the equipment they might encounter.
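Switching layouts on a computer is purely a software remapping of key positions to characters. As a sketch, this table maps each QWERTY key position to the Dvorak character assigned to the same position:

```python
# Software layout switching: the same physical keystrokes, reinterpreted.
# Each character in QWERTY is the key cap a touch-typist presses; the
# character at the same index in DVORAK is what a Dvorak layout emits.

QWERTY = "qwertyuiopasdfghjkl;zxcvbnm,./"
DVORAK = "',.pyfgcrlaoeuidhtns;qjkxbmwvz"
REMAP = str.maketrans(QWERTY, DVORAK)

# A QWERTY touch-typist's finger movements for "hello", interpreted by a
# computer switched to the Dvorak layout:
print("hello".translate(REMAP))  # d.nnr
```

This is exactly why a QWERTY typist sitting at a Dvorak-configured machine produces gibberish: the hardware sends key positions, and the operating system decides what character each position means.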
Fourth Generation of Input Device
A keyboard is the set of typewriter-like keys that enables you to enter data into a computer. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys. The keys on computer keyboards are often classified as follows:
alphanumeric keys -- letters and numbers
punctuation keys -- comma, period, semicolon, and so on.
special keys -- function keys, control keys, arrow keys, caps lock keys, and so on.
There are three kinds of keyboard: the QWERTY keyboard, the Dvorak keyboard, and the ergonomic keyboard.
Thursday, March 02, 2006
The Optical Disk
An optical disk is a plastic-coated disk that stores digital data, such as music or text, as tiny pits etched into the surface, read by a laser scanning the surface. Also called a laser disk, it is a direct-access disk written and read by light. CD, CD-ROM, DVD-ROM and DVD-Video are read-only optical disks that are recorded at the time of manufacture and cannot be erased. CD-R, DVD-R, WORM and magneto-optic (in WORM mode) disks are write-once: they are recorded in the user's environment but cannot be erased. CD-RW, DVD-RAM, DVD-RW and MO disks are rewritable.
Rewritable disks use either magneto-optic (MO) or phase change technology. Used in libraries that hold multiple cartridges, magneto-optic (MO) disks are extremely robust. Phase change disks (CD-RW, DVD-RAM, etc.) are lower cost consumer-oriented products, and DVD-RAM is expected to become very popular. Optical disks have some advantages over magnetic disks. They have higher capacities as removable modules, and they are not subject to head crashes or corruption from stray magnetic fields. They also have a 30-year life and are less vulnerable to extremes of hot and cold.
Writability    Optical Disk Types
Read only      CD, CD-ROM, DVD-ROM, DVD-Video
Write once     CD-R, DVD-R, WORM
Rewritable     CD-RW, DVD-RAM, DVD-RW, MO, DataPlay
The Third Generation of Input Device
We have already finished with the tape, or magnetic tape, input device. Now we will discuss the disk. There are two kinds of disk: the optical disk and the magnetic disk.
Now let's tackle the magnetic disk first.
A magnetic disk is a memory device, such as a floppy disk, a hard disk, or a removable cartridge, that is covered with a magnetic coating on which digital information is stored in the form of microscopically small magnetized needles. It is the primary computer storage device. Like tape, it is magnetically recorded and can be re-recorded over and over. Disks are rotating platters with a mechanical arm that moves a read/write head between the outer and inner edges of the platter's surface. Finding a location can take as long as one second on a floppy disk or as little as a couple of milliseconds on a fast hard disk.

The disk surface is divided into concentric tracks (circles within circles). The thinner the tracks, the more storage. The data bits are recorded as tiny magnetic spots on the tracks; the smaller the spot, the more bits per inch and the greater the storage. Tracks are further divided into sectors, which hold a block of data that is read or written at one time; for example, READ SECTOR 782 or WRITE SECTOR 5448. To update the disk, one or more sectors are read into the computer, changed, and written back to disk. The operating system figures out how to fit data into these fixed spaces.
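As a sketch of how a sector number like the 782 in READ SECTOR 782 relates to tracks, assume a fixed number of sectors per track. Real drives vary the count across zones of the platter, and 63 here is just an illustrative value:

```python
# Simplified disk addressing: map a linear sector number to a
# (track, sector) pair and back, assuming a fixed geometry.

SECTORS_PER_TRACK = 63  # illustrative; real drives vary this by zone

def to_track_sector(linear):
    """Which track, and which sector within it, holds linear sector n."""
    return divmod(linear, SECTORS_PER_TRACK)

def to_linear(track, sector):
    return track * SECTORS_PER_TRACK + sector

track, sector = to_track_sector(782)
print(track, sector)  # 12 26, since 12 * 63 + 26 = 782
```

The operating system works with the flat linear numbers; the drive's controller performs this kind of mapping to position the arm over the right track.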
Wednesday, March 01, 2006
The Third Generation of Input Device
Tape has been more economical than disks for archival data, but that is changing as disk capacities have increased enormously. If tapes are stored for the duration, they must be periodically recopied or the tightly coiled magnetic surfaces may contaminate each other.
Sequential Medium
The major drawback of tape is its sequential format. Locating a specific record requires reading every record in front of it or searching for markers that identify predefined partitions. Although most tapes are used for archiving rather than routine updating, some drives allow rewriting in place if the byte count does not change. Otherwise, updating requires copying files from the original tape to a blank tape (scratch tape) and adding the new data in between.
Track Formats
Tracks run parallel to the edge of the tape (linear recording) or diagonally (helical scan). A linear variation is serpentine recording, in which the tracks "snake" back and forth from the end of the tape to the beginning. Legacy open reel tapes used nine linear tracks (8 bits plus parity), while modern cartridges use 128 or more tracks. Data are recorded in blocks of contiguous bytes, separated by a space called an "interrecord gap" or "interblock gap." Tape drive speed is measured in inches per second (ips). Over the years, storage density has increased from 200 to 38,000 bpi.
Magnetic tape was first used to record computer data in 1951 on the Mauchly-Eckert UNIVAC 1. The recording medium was a 1/2 inch (13 mm) wide thin band of nickel-plated bronze. Recording density was 128 characters per inch (198 micrometre/character) on eight tracks at a linear speed of 100 in/s (2.54 m/s), yielding a data rate of 12,800 characters per second. Making allowance for the empty space between tape blocks, the actual transfer rate was around 7,200 characters per second.
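The UNIVAC figures above follow directly from density times speed; a quick arithmetic check, using the quoted 7,200 characters per second to estimate how much tape length the inter-block gaps consumed:

```python
# Sanity-check the UNIVAC I tape figures quoted above.

density_cpi = 128   # characters per inch
speed_ips = 100     # inches per second

raw_rate = density_cpi * speed_ips
print(raw_rate)     # 12800 characters per second, matching the text

effective_rate = 7200  # observed transfer rate, per the text
print(f"tape fraction carrying data: {effective_rate / raw_rate:.0%}")  # 56%
```

In other words, nearly half the tape passing the head was gap rather than data, which is why block sizes mattered so much.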
In a typical format, data is written to tape in blocks with inter-block gaps between them, and each block is written in a single operation with the tape running continuously during the write. However, since the rate at which data is written or read to the tape drive is not deterministic, a tape drive usually has to cope with a difference between the rate at which data goes on and off the tape and the rate at which data is supplied or demanded by its host. Various methods have been used alone and in combination to cope with this difference. A large memory buffer can be used to queue the data. The tape drive can be stopped, backed up, and restarted. The host can assist this process by choosing appropriate block sizes to send to the tape drive. There is a complex tradeoff between block size, the size of the data buffer in the record/playback deck, the percentage of tape lost on inter-block gaps, and read/write throughput.
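The block-size tradeoff described above can be illustrated with a toy model: larger blocks waste less tape on inter-block gaps but require bigger buffers in the drive and host. The gap length and density below are assumptions for illustration, not figures from any particular drive:

```python
# Toy model of the block-size / gap-overhead tradeoff on tape.
# GAP_INCHES and DENSITY_BPI are assumed values for illustration only.

GAP_INCHES = 0.6      # length of one inter-block gap
DENSITY_BPI = 1600    # recording density in bytes per inch

def tape_utilization(block_bytes):
    """Fraction of tape length holding data rather than gaps."""
    block_inches = block_bytes / DENSITY_BPI
    return block_inches / (block_inches + GAP_INCHES)

for size in (80, 800, 8000, 32768):
    print(f"{size:6d}-byte blocks: {tape_utilization(size):.0%} of tape holds data")
```

Tiny blocks leave most of the tape as gaps, while large blocks approach full utilization; the cost is buffer memory and more data lost per unreadable block.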
Tape has quite a long data latency for random accesses, since the deck must wind an average of 1/3 the tape length to move from one arbitrary data block to another. Most tape systems attempt to alleviate this intrinsic latency either by indexing, where a separate lookup table is maintained that gives the physical tape location for a given data block number, or by marking blocks with a tape mark that can be detected while winding the tape at high speed. Most tape drives now include some kind of data compression. There are several algorithms which provide similar results: LZ (most), IDRC (Exabyte), ALDC (IBM, QIC) and DLZ1 (DLT). The actual compression algorithms used are not the most effective known today, and better results can usually be obtained by turning off the compression built into the device and using a software compression program instead. Software compression also allows encryption to be performed after compression. (Once data has been encrypted, compression algorithms are no longer effective.) However, software compression can place high loads on processors. Future tape drives will likely incorporate hardware encryption after compression.
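The point about ordering compression before encryption is easy to demonstrate: encrypted output looks random, and random-looking data does not compress. This sketch uses zlib plus a toy XOR keystream standing in for real encryption (purely illustrative; it offers no security):

```python
# Demonstrate why compression must precede encryption. The XOR keystream
# "cipher" here is a toy stand-in for real encryption, used only to make
# the data look random.

import random
import zlib

data = b"the quick brown fox " * 500   # 10,000 bytes of redundant plaintext

def toy_encrypt(buf, seed=42):
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in buf)

compressed_then_encrypted = toy_encrypt(zlib.compress(data))
encrypted_then_compressed = zlib.compress(toy_encrypt(data))

print(len(data))                        # 10000
print(len(compressed_then_encrypted))   # small: redundancy squeezed out first
print(len(encrypted_then_compressed))   # ~10000: "random" bytes won't shrink
```

This is exactly why a drive's built-in compression achieves nothing when the host sends it already-encrypted data, and why the text expects future drives to encrypt after compressing.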
Tape remains a viable alternative to disk due to its higher bit density and lower cost per bit. Tape has historically offered enough of an advantage in these two areas over disk storage to make it a viable product, particularly for backup. The rapid improvement in disk storage density and price, coupled with arguably less-vigorous innovation in tape storage, has reduced the market share of tape storage products.
Sample of a Punch Card
The First and Second Generation of Input Device
A punch card is a medium for feeding data into a computer, essentially a card punched with holes or notches to represent letters and numbers or with a pattern of holes to represent related data. Also called Hollerith card.
From 1890 until the 1970s, punch cards were synonymous with data processing. The concepts were simple: the database was the file cabinet; a record was a card. Processing was performed on separate machines called "sorters," "collators," "reproducers," "calculators" and "accounting machines." Today, the punch card is all but obsolete except for voting systems in some states. Of course, the presidential election of 2000 brought punch cards into infamy and made the U.S. the butt of jokes worldwide for using such antiquated and error-prone systems.
The signature medium of computing's Stone Age, now obsolescent. The punched card actually predated computers considerably, originating in 1801 as a control device for mechanical looms. The version patented by Hollerith and used with mechanical tabulating machines in the 1890 U.S. Census was a piece of cardboard about 90 mm by 215 mm. There is a widespread myth that it was designed to fit in the currency trays used for that era's larger dollar bills, but recent investigations have falsified this.
IBM (which originated as a tabulating-machine manufacturer) married the punched card to computers, encoding binary information as patterns of small rectangular holes; one character per column, 80 columns per card. Other coding schemes, sizes of card, and hole shapes were tried at various times.
The 80-column width of most character terminals is a legacy of the IBM punched card; so is the size of the quick-reference cards distributed with many varieties of computers even today.
The method is quite simple: On a piece of light-weight cardboard, successive positions either have a hole punched through them or are left intact. The rectangular bits of paper punched out are called chads. Thus, each punch location on the card represents a single binary digit (or "bit"). Each column on the card contained several punch positions (multiple bits).
IBM punch card format
The IBM card format, which became standard, held 80 columns of 12 punch locations each, representing 80 characters. Originally only numeric information was coded, with 1 or 2 punches per column: digits (digit[0-9]) and signs (zone[12,11] – sometimes overpunching the Least Significant Digit). Later, codes were introduced for upper-case letters and special characters. A column with 2 punches (zone[12,11,0] + digit[1-9]) was a letter; 3 punches (zone[12,11,0] + digit[2-4] + 8) was a special character. The introduction of EBCDIC in 1964 allowed columns with as many as 6 punches (zones[12,11,0,8,9] + digit[1-7]). The punch cards were 7 3/8 inches long by 3 1/4 inches high and 0.007 inch thick, with one of the upper corners cut at an angle.
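The zone-plus-digit scheme above can be sketched as a small decoder. This handles digits and upper-case letters only; the multi-punch special characters and EBCDIC extensions are left out of the sketch:

```python
# Decode one punch-card column using the zone + digit scheme:
# a single punch is a digit; zone 12 + digit 1-9 gives A-I,
# zone 11 + digit 1-9 gives J-R, zone 0 + digit 2-9 gives S-Z.

def decode_column(punches):
    """punches: set of punched row numbers (12, 11, 0-9)."""
    if len(punches) == 1:                       # single punch: a digit 0-9
        (d,) = punches
        return str(d) if d not in (11, 12) else '?'
    zones = punches & {12, 11, 0}
    digits = sorted(punches - {12, 11, 0})
    if len(zones) == 1 and len(digits) == 1:
        d = digits[0]
        if 12 in zones and 1 <= d <= 9:
            return chr(ord('A') + d - 1)        # 12 + 1..9 -> A..I
        if 11 in zones and 1 <= d <= 9:
            return chr(ord('J') + d - 1)        # 11 + 1..9 -> J..R
        if 0 in zones and 2 <= d <= 9:
            return chr(ord('S') + d - 2)        #  0 + 2..9 -> S..Z
    return '?'  # special characters not handled in this sketch

# "IBM" punched as three columns:
print("".join(decode_column(c) for c in [{12, 9}, {12, 2}, {11, 4}]))  # IBM
```

Note how the letter groups mirror the card's physical layout: the three zone rows each select one nine-letter band of the alphabet.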
In its earliest uses, the punch card was not just a data recording medium, but a controlling element of the data processing operation. Electrical pulses produced when the read brushes passed through holes punched in the cards directly triggered electro-mechanical counters, relays, and solenoids. Cards were inexpensive and provided a permanent record of each transaction. Large organizations had warehouses filled with punch card records. One reason punch cards persisted into the early computer age was that an expensive computer was not required to encode information onto the cards. When the time came to transfer punch card information into the computer, the process could occur at very high speed, either by the computer itself or by a separate, smaller computer (e.g. an IBM 1401) that read the cards and wrote the data onto magnetic tapes or, later, on removable hard disks, that could then be mounted on the larger computer, thus making best use of expensive mainframe computer time.
Punched-card systems fell out of favor in the mid to late 1970s, as disk storage became cost-effective and affordable interactive terminals meant that users could edit their work on the computer directly rather than requiring the intermediate step of punched cards. However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards displayed 80 columns of text, for compatibility with existing software. Many programs still observe the convention of 80 text columns, although strict adherence is fading as newer systems employ graphical user interfaces with variable-width fonts.