IBM PC by You Tan
The IBM PC (8086) system was the original IBM microcomputer. It had two 160-kilobyte 5.25-inch single-sided, double-density floppy disk drives, 640 KB of memory, and an NEC 15 display monitor.
The 8086 was a 16-bit microprocessor designed by Intel in 1978. It was a true 16-bit processor, communicating with its support cards over a 16-wire data connection. The chip contained 29,000 transistors and, through its 20-bit address bus, could address 1 MB of RAM. The 8086 built on Intel's earlier 8080 and 8085 designs, but unlike those chips it was a full 16-bit design. Although the 8086 was a great chip, it was expensive. To overcome this, Intel released the 8088 in 1979, identical to the 8086 except that the external data bus was reduced to 8 bits, which allowed the use of cheaper 8-bit support chips already developed for previous processors. The 8088 was therefore selected for IBM's first PC, which went on to establish the standard for PC CPUs through the x86 instruction set that the 8086 introduced.
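The bus widths above can be sanity-checked in a few lines of Python. The segment:offset scheme shown is the 8086's standard way of forming a 20-bit address, though the text above does not go into it:

```python
# Illustrative check of the bus widths mentioned above.
def addressable_bytes(address_bits):
    """Bytes reachable with a given address-bus width."""
    return 2 ** address_bits

# A 20-bit address bus reaches 1 MB; a plain 16-bit address only 64 KB.
assert addressable_bytes(20) == 1024 * 1024
assert addressable_bytes(16) == 64 * 1024

def physical_address(segment, offset):
    """8086 segmented addressing: segment * 16 + offset, 20 bits wide."""
    return ((segment << 4) + offset) & 0xFFFFF

# The very top of the 1 MB address space:
assert physical_address(0xFFFF, 0x000F) == 0xFFFFF
```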
The original model of the IBM PC (Personal Computer) was designated the IBM 5150 and was released in September 1981. The original PC shipped with a version of Microsoft BASIC and a CGA (Color Graphics Adapter) video card; a cassette port and a floppy disk drive were optional extras, and no hard disk was available. It had only five expansion slots; maximum memory was 256 KB, with 64 KB on the main board and three 64 KB expansion cards. The processor was an Intel 8088 running at 4.77 MHz (the 8086 was not the processor used in the original IBM PC). IBM sold it in configurations with 16 KB or 64 KB of RAM. It was mainly businesses that purchased the PC, and generally well-educated middle managers who saw the potential of the VisiCalc spreadsheet. Reassured by the IBM name, they began buying the machines on their own budgets to help do the calculations they had learned at business school.
IBM Personal Computer XT by Josua Simanungkalit
IBM introduced the PC, or Personal Computer, on August 12, 1981. At the time of its introduction, most computers were still processing 8 bits of information per clock cycle. IBM revolutionized the industry by hitting the market with a computer based on the Intel 8088 processor, which could use existing 8-bit support chips but processed information internally in 16-bit words. The PC featured an expandable design, known as "open architecture", which made it possible for users to add features to their machines without replacing the whole computer.
The original IBM Personal Computer (Model 5150) came standard with 16 KB of RAM (nine soldered 16-kilobit chips), expandable to 64 KB on the system board (three more banks of sockets) and to 540 KB total (not 640 KB, due to a hardware bug). It shipped with a green-on-black slow-phosphor TTL monochrome monitor, which plugged into an AC outlet on the 63.5-watt black power supply, and a monochrome display adapter.
On March 8, 1983, IBM released the Personal Computer XT; "XT" stands for eXtended Technology. It was basically an improvement over the original PC rather than a radical departure: a PC could be upgraded to XT specifications with a few changes on the motherboard. Standard equipment included the tried-and-true 8088 processor at 4.77 MHz from the PC, 128 KB of RAM, a 5.25-inch full-height floppy drive, and a large (for the time) 10 MB full-height hard disk drive. It was one of the first computers to come standard with a hard drive.
- Type: Personal Computer
- Released: March 8, 1983
- Processor: Intel 8088 @ 4.77 MHz
- Memory: 64 KB to 640 KB
- OS: IBM BASIC / MS-DOS 2.0
- Drive: 5.25" full-height floppy drive
- Original price: $7,545.00 (according to IBM price list)
The Intel 8088 is a microprocessor based on the 8086, with 16-bit registers but an 8-bit external data bus. The 8088 was targeted at economical systems, since wide-bus circuit boards were still fairly expensive when it was released. By far the most influential microcomputer to use the 8088 was the IBM PC, where it ran at a clock frequency of 4.77 MHz. One factor in choosing the 8-bit 8088 was that it could use existing Intel 8085-type support components.
The operating system usually sold with the XT was PC DOS 2.0. Its eight expansion slots were an increase over the five in the IBM PC, although three were taken up by the floppy drive adapter, the hard drive adapter, and the serial card. Later models came with 256 KB of memory standard, and eventually models with 640 KB and a 20 MB hard drive were sold.
The XT originally came only in a standard configuration with the hard disk; a model without the hard drive did not become available until 1985. Other models came with two half-height floppy drives as well as the hard drive. Like the original PC, the XT came with a BASIC interpreter in ROM. Since this interpreter was meant to be used with a cassette drive (which wasn't offered on the XT), the only ways to access it were to boot with the hard drive disconnected and the floppy drive empty; to use the BASICA program, included on a floppy disk, which added extensions for using the disk drives; or to invoke a BIOS call manually from a debugger.
There were also two versions of the XT motherboard, released at different times. The original supported 64 KB to 256 KB of RAM on board, while an updated motherboard, found in XTs with a 1986 BIOS date, supported 256 KB to 640 KB and added support for the 101-key keyboard, 3.5-inch floppy drives, and a few other modern niceties.
Since the introduction of the XT, there has been an explosion in the PC industry, made possible largely by the open architecture of the IBM PC and XT. Many 286, 386, and 486 computers were built with the same expansion slots as the IBM XT, giving rise to the term ISA, or Industry Standard Architecture.
The 5160 was replaced by the PC XT S (20 MB hard disk, slimline floppy drive, 640 KB RAM) and then by the PC XT 286. PC and XT keyboards are not compatible with more modern PCs (IBM AT or newer), even with DIN-to-PS/2 mini-DIN plug adapters, because PC/XT keyboards use different keyboard scan codes. An XT-to-AT signal adapter is needed for compatibility with later computers.
Source: Wikipedia, the free encyclopedia
Classic Commodore 64 by Ali Farooqi
The year was 1982, and the revolution in home computing had arrived: a machine with memory, sound, and graphics capabilities superior to any home computer before it. These attributes, coupled with its phenomenal price, made the Commodore 64 the best-selling computer model of all time.
The 64 in the name "Commodore 64" represented the amount of memory the computer had: 64 KB of RAM (Random Access Memory). At the time the "C64" was introduced, its main competitors offered a standard 16 KB of RAM for about $1,500. The initial price for a C64 was $595, giving it four times the memory at less than half the price!
As mentioned, this computer was marketed for home computing and was not used in the mainstream business environment, though many office applications were developed for and available on it.
During its debut at the 1982 Consumer Electronics Show in Las Vegas, the Commodore 64 quickly gained a reputation as a gaming enthusiast's computer. Because it could be plugged straight into a television set, much like its gaming-only competitors, the C64 crashed the gaming market: it did not require a specialized computer monitor.
On the strength of its marketing, technological offerings, and forward-thinking sales strategies, this personal computer became known as one of the most powerful personal computers on the market for the money.
- http://en.wikipedia.org/wiki/Commodore_64 - Commodore 64
- http://www.skillreactor.org/cgi-bin/index.pl?c64 The History of The Commodore 64 Series
Macintosh Plus by Liu Zhang
The Macintosh Plus was released on January 10, 1986. It was a great success and remained in production until October 15, 1990. On sale for over four years and ten months, it was the longest-lived Mac in Apple's history.
The Macintosh, or Mac, is a line of personal computers designed, developed, manufactured, and marketed by Apple Computer. The original Macintosh was the first commercial personal computer to use a graphical user interface (GUI) and mouse instead of the then-standard command line interface. But the original Mac had clear limitations:
- 1. It had very little memory, even compared with other personal computers in 1984, and could not be expanded easily;
- 2. It lacked a hard drive, and it was not easy to attach one.
Apple realized that the Mac needed improvement in these areas and therefore introduced the Macintosh Plus. The Mac Plus had several benefits over the original Mac:
- 1. The extended ROM held the new version of Mac OS.
- 2. It had enhanced graphic libraries.
- 3. It offered 1 MB of RAM, expandable to 4 MB.
- 4. It had a revolutionary SCSI parallel interface, allowing up to seven peripherals - such as hard drives and scanners - to be attached to the machine. The SCSI bus on the Mac Plus was officially rated at 1.25 MB/s by Apple, although real-world testing showed it to be slightly over 2.1 MB/s - still four times the speed of Apple's earlier serial-port hard drive.
- 5. It also provided AppleTalk networking and the new file manager.
- 6. The new floppy-disk unit could use double-sided 800 KB disks.
Also, the Mac Plus made a reasonable web server, as the Macintosh Plus Web Server demonstrated until it was retired in October 2001.
All desktop Macs after the Plus, up until the slot-loading iMacs, included a fan to reduce internal heat. Because the Mac Plus is convection cooled, the vents on the side and top of the computer should never be blocked. Even so, the power supply could develop problems from overheating after several years of use.
With other issues remaining, particularly its low processor speed and limited graphics ability, the Mac Plus was discontinued in October 1990. Updated Motorola CPUs had made faster machines available, and the Macintosh Plus was succeeded by the Macintosh II, IIx, SE, and others.
Compaq Portable III by Artid Suwanvaraboon
In November 1982, the Compaq Portable was introduced to Americans who wanted another way to run their businesses. Buyers wanted a personal computer that could easily be carried from place to place and run their business even when they were not in the office. Compaq announced the machine before shipping it, however, and used the intervening months to collect feedback on the product.
One year later, in March 1983, Compaq released the Portable to the market at the high price of $3,590. It was also smaller than competing portables. The Compaq Portable used an Intel 8088 at 4.77 MHz, the same processor as the IBM PC, with 128 KB of RAM (expandable to a 640 KB maximum), a built-in 9-inch monochrome monitor displaying 80 x 25 text, and a color graphics card. The 12.5 kg Compaq portables were built into a luggable case the size of a portable sewing machine. They had basically the same hardware as an IBM PC, but with Compaq's custom BIOS instead of IBM's, and ran the MS-DOS operating system and BASIC licensed from Microsoft.
Although the Compaq Portable's functions and software compatibility closely resembled the IBM PC's, it differed in several respects. Compaq's keys lacked the hardwired click and had a softer touch instead. Another advantage was that users could choose their own level of audible keystroke feedback: simultaneously pressing the ALT key and the + or - keys raised or lowered the volume from no click to a loud one.
Finally, the benefits Compaq offered over its competitor were design, compatibility, portability, and price, the most significant being price. For example, Compaq's options were all less expensive than IBM's: a 64 KB memory board cost $195 for the Compaq versus $350 for the IBM. The one obstacle Compaq faced was IBM itself, with its longstanding reputation for designing hardware and software. Compaq's answer was to position the machine as a comparatively low-cost, portable alternative.
Storage Media for Mainframes and Minicomputers by Gadde Lalitha
Mainframes (often colloquially referred to as Big Iron) are computers used mainly by government institutions and large companies for mission-critical applications, typically bulk data processing such as censuses, industry/consumer statistics, ERP, and financial transaction processing.
The term originated during the early 1970s with the introduction of smaller, less complex computers such as the DEC PDP-8 and PDP-11 series, which became known as minicomputers or just minis. The industry/users then coined the term "mainframe" to describe larger, earlier types (previously known simply as "computers").
Mainframes are primarily used for batch processing and OLTP (online transaction processing) applications. They are physically large and located centrally, with the data in one place, and use shared I/O channels to access multiple magnetic storage devices. These storage devices, typically arrays of disks storing data in magnetic form, are very expensive and very reliable. Mainframes use tapes and optical disks as secondary storage for storing data offline or for backup.
Minicomputers are smaller versions of mainframes, used at the departmental level or for special purposes. Data is stored on hard disks attached to the minicomputer as the primary storage medium and is shared between minicomputers using network file shares over local area networks. Minicomputers are not used for storing large amounts of data, and data on them is not as reliable as data stored on mainframes.
Both mainframes and minicomputers use RAM to store volatile data. A direct access storage device, or DASD (IPA ['dæzdi]), is a form of magnetic disk storage historically used with mainframes and minicomputers.
Modern mainframe computers have abilities not so much defined by their performance capabilities as by their high-quality internal engineering and resulting proven reliability, high-quality technical support, top-notch security, and strict backward compatibility for older software. These machines can and do run successfully for years without interruption, with repairs taking place while they continue to run. Mainframe vendors offer such services as off-site redundancy - if a machine does break down, the vendor offers the option to run customers' applications on their own machines (often without users even noticing the change) whilst repairs go on.
The robustness and dependability of these systems has been one of the main reasons for the longevity of this class of computers, as they are used in applications where downtime would be catastrophic. The term Reliability, Availability and Serviceability, or RAS has become a marketing term used to denote this robustness. This robustness is often the argument used against replacing mainframes with other types of computers.
Mainframes often support thousands of simultaneous users, who gain access through "dumb" terminals or, in minicomputer (mid-range) environments, terminal emulation. A redundant array of independent disks (RAID) is a form of DASD.
- The Essentials of Computer Organization and Architecture
- Storage Networks: The Complete Reference
Punched Tape by Rupali Girdher
Punched tape is an obsolete data storage medium used in the early twentieth century to store data for minicomputers and teleprinter machines. The idea centered on a long strip of paper in which punched holes represented the stored data. It was introduced to replace punched cards, which also stored data but were cumbersome to use and could store only limited amounts.
Punched tapes were mainly used to store instructions for minicomputers, such as the simple logic of adding two numbers. Data was represented by the presence or absence of a hole at a particular location. Originally punched tape had five rows of holes for data, but later 6-, 7-, and 8-row formats added capacity. Tapes were also used to store messages for teleprinters sending telegrams. Because punched tape could store a large amount of information, it suited teleprinter communication well: operators could type a message onto the tape and later transmit it from the tape at high speed. This not only saved operators from storing data on multiple punch cards but also lowered interchange costs for businesses, since a tape reader could read faster from punched tape, reducing transmission costs.
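As a toy illustration of "data represented by the presence or absence of a hole," the sketch below punches one character per position on an imaginary 8-level tape. Real tapes also carried a smaller sprocket-feed hole and used several different character codes; this is purely illustrative:

```python
# One character per tape position; 'o' marks a hole for each set bit,
# '.' marks no hole. Eight "levels" correspond to the 8-row tape format.
def punch_tape(text):
    rows = []
    for ch in text:
        bits = ord(ch)
        rows.append(''.join('o' if (bits >> level) & 1 else '.'
                            for level in range(8)))
    return rows

print(punch_tape("HI"))  # ['...o..o.', 'o..o..o.']
```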
Punched tape suffered from two major drawbacks:
- Reliability: data validity was a major concern, since punched data had to be verified manually against the original.
- Storage problems: paper tape could become worn with heavy use and hence unreadable over time.
Punched Cards by Anna Filonchuk
What is the punched card?
- A punched card is an early storage medium made of thin cardboard stock that holds data as patterns of punched holes. The card contains 12 rows and 80 columns, and each column is typically used to represent a single piece of data, such as a character. The holes are punched by a keypunch machine or a card punch peripheral, and the cards are fed into the computer by a card reader.
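A heavily simplified sketch of the 12-row, 80-column layout described above: each digit is stored as a single punch in the row matching its value, roughly how digits were punched in Hollerith coding. Letters, which also used the extra "zone" rows, are omitted here, and the row ordering is idealized:

```python
ROWS, COLUMNS = 12, 80

def punch_card(digits):
    """Return a 12 x 80 grid; 'o' marks a punched hole, '.' blank card."""
    card = [['.'] * COLUMNS for _ in range(ROWS)]
    for col, ch in enumerate(digits[:COLUMNS]):
        card[int(ch)][col] = 'o'   # rows 0-9 act as the digit rows
    return card

card = punch_card("1890")          # the census year from the history above
assert [card[int(d)][i] for i, d in enumerate("1890")] == ['o'] * 4
```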
Picture of the punched card from: www.fortunecity.com
Who invented the punched card?
- The standard punched card was originally invented by Herman Hollerith. It was first used for vital statistics tabulation by the New York City Board of Health and several states. After this trial use, punched cards were adopted for use in the 1890 census. His idea for using punched cards for data processing came after he'd seen the punched cards used to control Jacquard looms. Jacquard, working in France around 1810, originated the idea of using holes punched in cardstock to control the pattern a loom weaves.
In turn, Charles Babbage, who originated the idea of a programmable computer, adopted Jacquard's system of punched cards to control the sequence of computations in the design of his Analytical Engine in 1837. Such cards were also used as an input method for the primitive calculating machines of the late 19th century.
Why were the punched cards used?
- From 1890 until the 1970s, they were synonymous with data processing. Concepts were simple: the database was the file cabinet; a record was a card. So, punch cards were the primary medium for data entry, storage, and processing in institutional computing. Processing was performed on separate machines called sorters, collators, reproducers, calculators and accounting machines. These machines allowed sophisticated data-processing tasks to be accomplished long before modern (electronic) computers were invented. The card readers used an electrical (metal brush) or, later, optical sensor to detect which positions on the card contained a hole. They had high-speed mechanical feeders to process around one hundred cards per minute. All processing was done with electromechanical counters and relays.
What benefits and capabilities does this item provide?
- In its earliest uses, the punch card was not just a data-recording medium but a controlling element of the data-processing operation. Punched cards that held processing instructions were called control cards. Electrical pulses produced when the brushes passed through holes punched in the cards directly triggered electro-mechanical counters. Cards were inexpensive and provided a permanent record of each transaction. Large organizations had warehouses filled with punch-card records.
One reason punch cards persisted into the early computer age was that an expensive computer was not required to encode information onto the cards. When the time came to transfer punch-card information into the computer, the process could occur at very high speed, either by the computer itself or by a separate, smaller computer that read the cards and wrote the data onto magnetic tapes or, later, on removable hard disks, that could then be mounted on the larger computer, thus making best use of expensive mainframe computer time.
Do we use punched cards now?
- Punched-card systems fell out of favor in the mid to late 1970s, as disk storage became cost-effective, and affordable interactive terminals meant that users could edit their work with the computer directly rather than requiring the intermediate step of the punched cards.
However, their influence lives on through many standard conventions and file formats. The terminals that replaced the punched cards displayed 80 columns of text, for compatibility with existing software. Many programs still operate on the convention of 80 text columns, although strict adherence to that is fading as newer systems employ graphical user interfaces with variable-width type fonts. Today, the punch card is all but obsolete except for voting systems in some states.
- 1. www.en.wikipedia.org
- 2. www.cs.uiowa.edu
- 3. www.fortunecity.com
Magnetic Tape by Leah Schmidt
Magnetic tape is a storage medium that is still in use today. It consists of a plastic strip with a magnetic coating over it. The length and width of the strip has varied throughout history and depending on the function and device for which it records. Most types of recording tape are magnetic, whether they record audio, visual or computer data.
Various prototypes of magnetic-storage computers began construction in the 1940s, only to be abandoned before use, often due to financial constraints. One such effort was at the Electronic Control Company, which received a contract from the US government to build a computer with magnetic tape input and output, named UNIVAC. Its owners formed EMCC after the dissolution of Electronic Control; EMCC was promptly purchased by Remington Rand, which financed the completion of UNIVAC.
UNIVAC was completed and tested in 1951 and sold that same year to the US Census Bureau. It was the first commercial computer to feature magnetic tape storage. Eight tape drives stood separate from the main computer, each six feet high and three feet wide. Each drive used a half-inch-wide, 1,200-foot-long strand of nickel-plated bronze tape. Census data was fed into the computer, which weighed 29,000 pounds and took up an entire room. The UNIVAC used seven of the tape drives for processing and the eighth to keep time.
IBM's computers of the 1950s quickly adopted magnetic tape, and it became the industry-standard storage medium. Tapes retained their length, in the 1980s even growing to 3,600 feet, albeit with a much smaller width.
LINCtape and DECtape were a second generation of magnetic tape storage, used mainly as personal storage devices. The advance was that these tapes could be written and rewritten as needed, replacing the expensive, one-time-use tape that came before. They were eventually replaced by diskettes, which had the advantages of a protective casing and greater speed.
While magnetic tape is an older technology, it is still in use today. Pick up any cassette or VHS tape and the same technology is used. The evolution of the technology today, however, includes a fixed plastic shell that holds the tape in place and prevents damage to the stored information; the original versions were stored on large reels that left the tape exposed.
Today, storage has become much more compact: memory cards make it easy to store large amounts of data on something the size of a quarter. But it all started with a 1,200-foot, half-inch tape that was used to count the census in the 1950s.
- "Magnetic tape." Wikipedia, The Free Encyclopedia. 27 Aug 2006, 00:22 UTC. Wikimedia Foundation, Inc. 29 Aug 2006 (http://en.wikipedia.org/w/index.php?title=Magnetic_tape&oldid=72097852).
- "The Evolution of the Computer." 28 Aug 2006 http://history.acusd.edu/gen/recording/computer1.html
Floppy Disk 3.5 inch by Nutcha Sinthuchat
Last Wednesday I visited Grace's Place, where there is an exhibit about computers and information technology. There are a lot of interesting things in this museum. When I looked around, I saw the 3.5-inch floppy disk that I used in high school, so I decided to do some research to learn more about floppy disks.
A floppy disk is a data storage device composed of a disk of thin, flexible magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive (FDD). The 3.5-inch floppy disk was introduced by Sony in 1983, and the first computer to use the format was the HP-150. The 3.5-inch floppy disk was used in business to distribute software, transfer data between computers, and create small backups. It was also often used to store a computer's operating system (OS), application software, and other data.
The formatted capacity of the high-density 3.5-inch floppy disk was 1,474,560 bytes. In binary prefixes, this is equivalent to 1,440 kibibytes (KiB), or 1.41 mebibytes (MiB); in decimal units, it is 1.47 megabytes (MB). Computer storage manufacturers frequently use the decimal system when marketing to consumers because the result looks larger.
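The unit conversions above can be verified directly. The track/side/sector geometry in the comment is the standard high-density 3.5-inch layout, not something stated in the text:

```python
capacity = 1_474_560                     # formatted capacity in bytes
assert capacity == 80 * 2 * 18 * 512     # tracks x sides x sectors x bytes/sector

print(capacity / 1024)                # 1440.0 binary kilobytes (KiB)
print(round(capacity / 1024**2, 2))   # 1.41 mebibytes (MiB)
print(round(capacity / 1000**2, 2))   # 1.47 decimal megabytes (MB)
```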
The 3.5-inch floppy disk had advantages over the 5.25-inch floppy disk. First, the three densities of 3.5-inch floppy disk are partially compatible: higher-density drives can read, write, and format lower-density media without problems, provided the correct media is used for the density selected. Second, the 3.5-inch floppy had a better design: a small circle of "floppy" magnetic material is encased in hard plastic, whereas earlier floppies lacked this case, which protects the magnetic material from abuse and damage. A sliding metal cover protects the delicate magnetic surface when the diskette is not in use and automatically opens when the diskette is inserted into the computer. Finally, the diskette's rectangular shape means the user can insert it only one way. By the end of the 1980s, the 5.25-inch disks had been superseded by the 3.5-inch disks, and by the mid-1990s 5.25-inch drives had virtually disappeared as the 3.5-inch disk became the preeminent floppy disk.
Compact Disc (CD) by Nutchapol Tanchareon
After visiting Grace's Place in the Computer Center Building (CCB) yesterday, the topic that most interested me was the compact disc (CD). Today, people reach for a "CD" whenever they want to store their work files; everyone knows that CDs can be used for data storage. However, not many people know what "CD" stands for, or that, at the beginning, CDs were not used for keeping data. This research will therefore explain what a compact disc is, what it is used for, and what benefits it provides.
A compact disc, or CD, is an optical disc used to store digital data; it was originally developed for storing digital audio. Recording on a CD begins at the center of the disc and proceeds outward to the edge, which allows the various size formats available. Standard CDs come in two sizes. The most common is 120 mm in diameter, with an 80-minute audio capacity and 700 MB of data storage. The second is 80 mm in diameter, called a "mini CD," with 21 minutes of music and 184 MB of data.
The history of the CD started in the early 1970s, when researchers at Philips began experiments on "audio-only optical discs." In 1979, Philips and Sony joined together to design the new digital audio disc, producing the "Red Book," the compact disc standard. On August 17, 1982, PolyGram produced the world's first mass-produced audio CD, containing classical music; this event has been called the "Big Bang" of the digital audio revolution. In 1985, the "Yellow Book" was developed by Sony and Philips to introduce a new version of the CD: the CD-ROM standard, an optical data-storage format readable by a computer with a CD-ROM drive.
From these origins, the CD grew to encompass other applications, adapting itself for use as a data storage device in the forms known as CD-ROM, CD-R, and CD-RW. Its success is reflected in annual worldwide sales of more than 30 billion discs.
Data-Tape Cartridge by Esha Christie
View Esha's entry on Wikipedia.
Optical Mouse by Chada Tichachol
An optical mouse is an advanced computer pointing device that uses a light-emitting diode, an optical sensor, and digital signal processing in place of the traditional mouse ball and electromechanical transducer. Movement is detected by sensing changes in reflected light, rather than by interpreting the motion of a rolling sphere. In 1980, the early optical mice were of two types. One, invented by Steve Kirsch of Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed on a special metallic surface with infrared absorbing ink. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid. Another one, invented by Richard F. Lyon and sold by Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same chip and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad. However, these two mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do.
As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the pointer and eliminating the need for a special mouse pad. This led to the widespread adoption of optical mice.
An optical mouse works better than the earlier mechanical mouse. The main reason is that it lasts longer and requires no cleaning, because it has no moving parts; it normally needs no maintenance other than removing debris that might collect under the light emitter (although cleaning a dirty mechanical mouse is fairly straightforward too). In addition, when used on a proper surface, its sensing is more precise than is possible with any pointing device using the old electromechanical design, making computer operation easier in general.
Palm Pilots by Brandon Hackmann
The personal digital assistant (PDA) was the name for the first generation of devices that led to the Palm Pilot. The first PDAs were introduced in the mid-1970s but were only advanced calculators; after a few years of advancement they became organizers, and finally palmtops. (Personal Digital Assistant)
The first Palm Pilot was introduced by Palm Computing in 1996. Its creators, Jeff Hawkins, Donna Dubinsky, and Ed Colligan, were the founders of Palm Computing. They were originally just trying to write software that would recognize handwriting on other devices; they then realized they could build better hardware for their software, namely a handheld device. Their first devices, the Pilot 1000 and Pilot 5000, did not have many different uses but had 128 KB and 512 KB of RAM respectively and used the Palm OS operating system. (Palm Pilot)
The palm pilot is used everywhere in business around the world. It has replaced little phone books, calendars, calculators, and many other things. The palm pilot has everything that a business person could want. They can even use the internet with them now, so they really don't need a laptop while traveling unless they want or need a bigger screen. In the upcoming years I think that the palm pilot will overtake the laptop, mainly because of its small size: it is able to store just as much information as a laptop and could possibly replace secretaries. It is about the size of a 3-by-5 note card and about 3/8 of an inch thick. It has a small screen and comes with a little pen with which you write or press on the screen. With this device, business people are connected to the world: they can store any information they need (with any program, as long as the software is on their device), get in touch with anyone with a computer, have notes in front of them and be able to read them, know their schedule by looking at a calendar, and a lot more.
- Personal Digital Assistant. Wikipedia encyclopedia. en.wikipedia.org. Accessed 29 August 2006. Copyright 2006.
- Palm Pilot. Wikipedia encyclopedia. en.wikipedia.org. Accessed 29 August 2006. Copyright 2006.
Acoustic Coupler by Tim Bax
An acoustic coupler is a device that couples electrical signals, usually into and out of a telephone network. This link is achieved through sound, not a direct electrical connection. Back in the 1960s it was illegal to make an electrical connection to the telephone network. During that time, acoustic couplers were used to connect modems to the telephone network.
The way an acoustic coupler works is relatively simple. A regular telephone handset is placed in a cradle designed with rubber seals that fit snugly around the microphone and earpiece of the handset. A loudspeaker in the cradle plays tones into the handset's microphone, and a microphone in the cradle picks up sounds from the handset's earpiece. This is how signals were passed in either direction.
The first device used for this type of connection was the Acoustic Data Coupler 300 Modem, which had a speed of 300 bits per second (bps). Today 28,800 bps is considered an extremely slow connection, so you can imagine how slow 300 bps would have been.
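As an illustration of how a modem pushed bits through an audio-only link, here is a sketch of the frequency-shift keying (FSK) used at 300 bps: each bit becomes a short burst of one of two tones. The mark/space frequencies below are the standard Bell 103 originate-side values; the sample rate and function are my own choices for the example.

```python
import math

RATE = 8000             # audio samples per second (chosen for the sketch)
BAUD = 300              # bits per second
MARK, SPACE = 1270.0, 1070.0   # Bell 103 originate tones: "1" / "0"

def fsk_samples(bits):
    """Turn a bit string into one continuous-phase audio waveform:
    each bit becomes a short burst of the mark or space tone."""
    samples, phase = [], 0.0
    per_bit = RATE // BAUD       # ~26 samples per bit at 8 kHz
    for bit in bits:
        freq = MARK if bit == "1" else SPACE
        for _ in range(per_bit):
            phase += 2 * math.pi * freq / RATE
            samples.append(math.sin(phase))
    return samples

wave = fsk_samples("0110")       # 4 bits, about 13 ms of audio
```

This audio would be played by the coupler's loudspeaker into the handset microphone; the receiving modem decides, burst by burst, which of the two frequencies it is hearing.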
Eventually, the telephone industry was deregulated and it became legal to electrically connect to the telephone network. When this happened, acoustic couplers started becoming rare. However, they are still used even today when people travel to countries where it is still illegal to electrically connect to the telephone network, or when such a connection is impossible or impractical.
Interestingly enough, acoustic couplers are used today to connect computers to the Internet via payphones. Someone with a laptop can park next to a payphone, put the payphone's handset on their coupler, and check their email from the road. However, with wireless technology on the rise, it looks like the acoustic coupler's days are numbered; it will become even rarer than it already is!
Networking Cards by Ross Williams
In this paper I have chosen to research the history of network cards to show how these cards and networks have changed over the years as technology has advanced. First, computer networking is defined as the science of communication between two computer systems. These networks can range from a link between two devices, such as a Bluetooth phone and a computer, to the Internet itself. A network card is the piece of hardware that allows a computer to communicate over a computer network.
In the past these network cards had to be physically installed in the computer by hand, but now many new computers have a network card of some sort already built into the motherboard. Today network cards can perform many different tasks to improve the performance of the computer system. Most network cards, in short, have the ability to alert the user when a good connection has been established and to report what data is being transmitted on this connection.
Over time these network cards were developed to link users to personal area networks (home), local area networks (community), campus area networks (campus), and many other networks that link users of one area or more together. These cards have been developed over time to make the process of joining users together on a local network possible. According to the Wikipedia website, in 1962 J.C.R. Licklider developed a working group he called the "Intergalactic Network," which made it possible for researchers at Dartmouth to develop a time-sharing system for distributed users of large computer systems. These two developments then made a path for MIT to create a network that linked telephone connections with each other, which in turn led Paul Baran in 1968 to create datagrams that transmitted data through different computer systems. By 1969 many campuses were connected using the ARPANET to link themselves with each other. Over time these developments have shaped the evolution of the network card, and today we link ourselves together on wireless networks with these cards. Whether the network is personal with one user or wide with many users, network cards link the network together to make the system work. All the information found for this paper came from the Wikipedia encyclopedia article on network cards.
Fiber Optic Cable by Chris Handley
Fiber optics is a way of sending information from one point to another using light as a medium. Its technology is still used today in a variety of ways. Quick communication has been a goal for thousands of years. Fiber optics has evolved over several stages to what we now call modern fiber optics. Its creation and innovation have led to many modern inventions.
The first stage of fiber optics developed back in 1870, when John Tyndall proved that light would use internal reflection to follow a specific path. His experiment was simple. He took a bucket of water and set it on a table, then attached a tube to the bucket to allow water to drain out. The other end of the tube was attached to another full bucket of water set on the floor. When he added a light source (the sun) to the first bucket, he found that the light would travel into the tube and all the way to the second bucket on the floor (the bucket that was not targeted by the light source) via the tubing. He proved that light would keep reflecting itself until it reached the end of its path.
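Tyndall's result follows from Snell's law: light striking the boundary between a denser and a rarer medium at more than the critical angle cannot escape, and is reflected back inside. A small worked example (the refractive indices are approximate textbook values):

```python
import math

# Critical angle for total internal reflection, from Snell's law:
# sin(theta_c) = n2 / n1, valid only when light travels from the
# denser medium (n1) toward the rarer one (n2).
def critical_angle_deg(n1, n2):
    if n2 >= n1:
        raise ValueError("total internal reflection needs n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# Water (n ~ 1.33) to air (n ~ 1.00), as in Tyndall's bucket experiment:
print(round(critical_angle_deg(1.33, 1.00), 1))   # roughly 48.8 degrees
```

Rays inside the falling water stream that hit its surface at shallower than this angle (measured from the normal) escape; the rest stay trapped, which is why the stream glows all the way down.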
In 1880, Alexander Graham Bell discovered that light could be used to carry information from one point to another. Unlike John Tyndall, Alexander Graham Bell used free-space light, arranging mirrors to carry the light to its destination. He called this invention the photophone: a device capable of transmitting the human voice 200 meters away.
Major advances in fiber optic technology came during and after the 1950s. The fiberscope was the first major advancement. This image-transmitting device was created using glass tubing for optimal reflection. These early glass tubes had excessive optical loss, because too much light was being reflected out of the tubing. This motivated scientists to create glass fibers that included a separate glass coating. This allowed light that would normally have been reflected out of the glass tubing to be reflected back in by the second layer of glass.
The next major advancement was the development of laser technology. Using laser diodes and light-emitting diodes (LEDs), scientists could generate large amounts of light condensed enough to be used in fiber optics. This new technology was capable of sending large amounts of information using light: it could transfer 10,000 times more information than the highest radio frequencies.
Fiber optic technology has been the cornerstone for many inventions and advances in technology. It has been evolving for well over a century, and there is still much room for innovation. It has aided in our quest to quickly transmit information to specified destinations.
Vacuum Tube by Leigh Anne Rozycki
The vacuum tube was first invented in 1915 by Irving Langmuir. Leading to the invention of the vacuum tube was the research done on several different types of evacuated tubes. These early tubes were either used for specific scientific purposes or they were novelties; an exception was the light bulb, which had a general purpose. From these initial tubes, the diode was created. A diode allows electric current to flow in only one direction, so it can convert an alternating current into a direct current. The invention of the diode led to the triode, which amplified the current, and from the triode came the tetrode and the pentode. Each of these types of tubes is known as a vacuum tube.
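The diode's conversion of alternating to direct current can be sketched numerically: an ideal diode conducts only while its plate is positive relative to the cathode, so it simply clips away the negative half of each AC cycle. This toy model ignores real tube behavior such as forward voltage drop and heater warm-up.

```python
import math

# Ideal-diode half-wave rectifier: pass positive samples, block negative ones.
def rectify(ac_samples):
    return [v if v > 0 else 0.0 for v in ac_samples]

ac = [math.sin(2 * math.pi * t / 20) for t in range(40)]   # two AC cycles
dc = rectify(ac)   # pulsating DC: never negative, peaks preserved
```

The output still pulses, which is why real power supplies follow the rectifier with smoothing capacitors.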
The purpose of the vacuum tube is to magnify a signal by controlling the movement of electrons within its envelope space. The envelope is the vacuum casing surrounding the main components of the tube. Each component of the tube is an electrode; in the triode they are named the filament, the plate, and the grid. The tube is plugged into an electric socket, which allows it to heat up. When the filament is heated it releases electrons. The plate is positively charged, so it attracts the negatively charged electrons from the filament. The grid helps to control the flow, or current, of electrons from the filament to the plate. This process needs to be executed within a vacuum so as to protect the flow of electrons from the elements.
Vacuum tubes have several specific uses. For a time, vacuum tubes were used in computers to amplify currents. The first computer that used vacuum tubes contained 18,000 of them and was created in 1946. An inherent problem of the tubes is that they burn out frequently. With so many tubes in one computer it was necessary to find a more reliable alternative, which led to the creation of transistors. The transistor served the same purpose as the tube but was cheaper and smaller. The only type of vacuum tube used in computers today is the tube found in the monitor, which is the key to creating the monitor display. Other current uses for vacuum tubes include audio systems, some televisions, and microwaves. Even with the progression of technology toward the transistor, vacuum tubes remain an integral piece of technology in today's society.
- "Vacuum tube (http://en.wikipedia.org/wiki/Vacuum_tube)" - Online encyclopedia reference.
- "Vacuum tubes (http://www.bookrags.com/research/vacuum-tubes-csci-01/)" - Online article.
Dumb Terminal by Chinnapong Saicholpitak
The dumb terminal was central to one of the most important traditional application architectures: the terminal-host system. The term refers to the terminal's dependence on the host, which in many cases was a mainframe computer. A dumb terminal was simply a stand-alone machine that relied fully on the processing power of the connected host computer; the host machine in turn distributed I/O functions back to the terminal sites.
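The division of labor can be sketched with a toy client-server exchange (illustrative only, using TCP sockets rather than any real terminal protocol): the "host" does all the processing, while the "terminal" only forwards input and displays the reply.

```python
import socket
import threading

# Toy "host": receives raw input from the terminal, does ALL the
# processing (here, just upper-casing it), and sends the result back.
def host(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

server = socket.socket()
server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=host, args=(server,), daemon=True).start()

# Toy "dumb terminal": no local processing at all; it only sends
# keystrokes and displays whatever the host returns.
term = socket.socket()
term.connect(("127.0.0.1", server.getsockname()[1]))
term.sendall(b"print payroll report")
reply = term.recv(1024).decode()  # assumes the short reply arrives in one recv
term.close()
print(reply)                      # PRINT PAYROLL REPORT
```

If the host is slow or overloaded, every terminal waits, which is exactly the response-time problem described below.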
Though the terminal-host approach worked, especially for organizations with limited resources, the host computer was frequently overloaded, which often resulted in slow response times. Because of the high volume of traffic generated by this system, and to reduce transmission costs, the host usually sent only limited detail to be shown on screen.
As I have observed application architecture over the years, I have found it very interesting that we are somehow stepping backward in some architecture designs. The thin-client approach in Web application architecture shows a close similarity to terminal-host systems. Thin clients offer simplicity in security and software distribution, but the result is that the browser on the client becomes much like a dumb terminal.
Silicon Chip by Susumu Obara
An Integrated Circuit (IC), also known as a microchip, silicon chip, computer chip, or simply a chip, is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) manufactured on the surface of a thin substrate of semiconductor material. An IC created on a silicon surface is called a silicon chip.
Integrated Circuit (IC) = the integration of a large number of tiny electric circuits, each with a specific function, into a small chip
Integrated circuits were devised in the mid-20th century, and the IC was improved in circuit scale and performance by technology advancements in semiconductor device fabrication. Robert Noyce of Fairchild Semiconductor was awarded a patent for an integrated circuit made of silicon on April 25, 1961.
Classification: ICs are classified by the number of transistors integrated on the chip.
|SSI (Small Scale Integration) |2 - 100 transistors |
|MSI (Medium Scale Integration) |100 - 1,000 transistors |
|LSI (Large Scale Integration) |1,000 - 100K transistors |
|VLSI (Very Large Scale Integration) |100K - 10M transistors |
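The table above can be mirrored in a toy classifier (the boundaries are the conventional, somewhat fuzzy, textbook ones):

```python
# Classify an IC by transistor count, following the table above.
def ic_scale(transistors):
    if transistors < 100:
        return "SSI"
    if transistors < 1_000:
        return "MSI"
    if transistors < 100_000:
        return "LSI"
    return "VLSI"

print(ic_scale(29_000))   # the 29,000-transistor Intel 8086 falls in LSI
```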
A practical IC package must satisfy several requirements:
- 1. Exchanging signals with the outside reliably
- 2. Conducting electricity (power) from the outside reliably
- 3. Radiating the heat generated inside quickly
- 4. Being suitable for assembly into a product
- 5. Low cost and environmentally friendly
Only half a century after their development began, integrated circuits have become ubiquitous. Computers, cellular phones, other digital appliances, and manufacturing and transport systems all depend on the existence of integrated circuits.
Motherboards by Nikki Dagenais
A motherboard is one of the most important components of a computer. The main job of a motherboard is to contain the microprocessor chip of the computer (which is the heart of a computer) and allow everything else to connect to it. This means that the motherboard is the part of the computer that allows all the other parts to receive power and communicate with one another. Essentially it is a piece of equipment that connects all the parts of the computer together.
The form factor is the term used to describe the shape or layout of a motherboard. This affects where the components of the computer will be placed, and it affects the shape of the computer itself. Everything that runs on the computer is part of the motherboard or plugs into it. A motherboard as a stand-alone piece of equipment is useless, but every computer needs one to operate. The motherboard has a direct effect on a computer's performance capabilities and its potential for upgrades.
Motherboards have improved drastically over the past twenty or so years. The first motherboards were only capable of holding a few components, including a processor and card slots for memory cards. Today motherboards have many more features, including Peripheral Component Interconnect (PCI) slots, which allow connections for video and sound cards, and Universal Serial Bus (USB) ports for connecting external hardware such as a printer or memory stick. A few other standard components of motherboards today include the Basic Input/Output System (BIOS) chip, by which the basic functions of a computer are controlled, and a real-time clock chip, by which the system time is maintained.
Teletype by Dyah Apsari
A teletype (teleprinter, telewriter, or TTY) is a now largely obsolete electro-mechanical typewriter which can be used to communicate typed messages from point to point through a simple electrical communications channel, often just a pair of wires.
It was originally a 'hard copy terminal' that printed text slowly, in capital letters, on rolls of paper. Teletypes were made by the Teletype Corporation. When DEC introduced visual display terminals, these could operate in Teletype mode, like a 'glass typewriter'. Many communications programs still include a TTY mode to provide the simplest level of communication with a remote computer. The machine was still used until the 1960s or so by the news wire services. A specially designed telegraph typewriter was used to send stock exchange information over telegraph wires: the ticker machines. Some radio stations still use a recording of the sound of one of these machines as background during news broadcasts. These teletypewriters are also still in use by the deaf for typed communications over the telephone, usually called a TDD or TTY.
The teleprinter evolved through a series of inventions by a number of engineers, including Royal E. House, David E. Hughes, Edward Kleinschmidt, Charles Krum and Emile Baudot. A predecessor to the teleprinter, the stock ticker machine, was used as early as the 1870s as a method of displaying text transmitted over wires. The most modern form of these devices is fully electronic and uses a screen instead of a printer.
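The key idea behind Baudot-style teleprinter codes can be sketched as follows. Five bits give only 2^5 = 32 combinations, too few for letters and figures together, so teleprinters switched between two "pages" of meanings using special shift codes. Note that every code value below is invented for this illustration; these are not the real ITA2 assignments.

```python
# Two shift characters select which "page" later codes are read from.
# (Values invented for the sketch, NOT the historical ITA2 table.)
LTRS, FIGS = 0b11111, 0b11011
letters = {c: i for i, c in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ", start=1)}
figures = {c: i for i, c in enumerate("0123456789-?:.,'", start=1)}

def encode(text):
    """Emit 5-bit codes, inserting a shift code whenever the page changes."""
    out, page = [], letters
    for ch in text:
        if ch in letters and page is not letters:
            out.append(LTRS)
            page = letters
        elif ch in figures and page is not figures:
            out.append(FIGS)
            page = figures
        out.append(page[ch])
    return out

codes = encode("AB12")   # a FIGS shift is inserted before the digits
```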
- History of Telewriter Development by R.A Nelson (http://www.cs.utk.edu/~shuford/terminal/teletype_news.txt)
- Teletype Machines
LA36 DECwriter II by Colin Post
The LA36 DECwriter II was Digital's first commercially successful keyboard terminal and became the industry standard. It provided 30-character-per-second printing for full utilization of a 300-baud communications line without the use of fill characters. Printable characters were stored in a buffer during the carriage-return operation, and while more than one character was in the buffer the printer operated at an effective speed of sixty characters per second. Adjustable pin-feed tractors took up to 6-part forms, with or without carbon, from 3'' to 14 7/8'' wide. Optional ROMs were available for APL as well as katakana and other character sets.
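The 30 cps figure follows directly from the line speed. Assuming common asynchronous framing of ten bits per character (one start bit, seven data bits, one parity bit, one stop bit; a typical arrangement for such terminals, not stated in the source), a 300-baud line carries 300 / 10 characters each second:

```python
# Characters per second on an asynchronous serial line.
def chars_per_second(baud, bits_per_char=10):
    return baud / bits_per_char

print(chars_per_second(300))   # 30.0, matching the LA36's print speed
```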
The LA36 DECwriter terminals replaced the ASR-33 Teletypes as the terminal of choice.
Teletype's ASR-33 could only use upper-case letters, numbers, and symbols. It typed only ten characters per second and weighed seventy-five pounds. It was also very loud: one would have to raise his or her voice in order to be heard while it was in use.