Saturday, 5 November 2011
Industrial technology
Planning and designing manufacturing processes and equipment is a main aspect of being an industrial technologist. An Industrial Technologist is often responsible for implementing certain designs and processes. Industrial Technology involves the management, operation, and maintenance of complex operation systems.
Medical technology
Medical Technology encompasses a wide range of healthcare products and is used to diagnose, monitor or treat diseases or medical conditions affecting humans. Such technologies (applications of medical science) are intended to improve the quality of healthcare delivered through earlier diagnosis, less invasive treatment options and reductions in hospital stays and rehabilitation times. Recent advances in medical technology have also focused on cost reduction. Medical technology may broadly include medical devices, information technology, biotech, and healthcare services.
Health technology is: any intervention that may be used to promote health, to prevent, diagnose or treat disease, or for rehabilitation or long-term care. This includes the pharmaceuticals, devices, procedures and organizational systems used in health care.
The term medical technology may also refer to the duties performed by clinical laboratory professionals in various settings within the public and private sectors. The work of these professionals encompasses clinical applications of chemistry, genetics, hematology, immunohematology (blood banking), immunology, microbiology, serology, urinalysis and miscellaneous body-fluid analysis. These professionals may be referred to as Medical Technologists (MT) or Medical Laboratory Technologists.

Medical technology extends and improves life. It alleviates pain, injury and handicap, and its role in healthcare is essential. Continuous innovation in medical technology enhances the quality and effectiveness of care. Billions of patients worldwide depend on medical technology at home, at the doctor's, in hospital and in nursing homes. Wheelchairs, pacemakers, orthopedic shoes, spectacles and contact lenses, insulin pens, hip prostheses, condoms, oxygen masks, dental floss, MRI scanners, pregnancy tests, surgical instruments, bandages, syringes, life-support machines: more than 500,000 products (in 10,000 generic groups) are available today. Medical technology represents only 6.3% of total healthcare expenditure in Europe - a modest share if you consider the benefits for every member of society. — Eucomed
Friday, 12 August 2011
IT and Indian Agriculture in the Future
Technologically it is possible to develop suitable systems, as outlined in the previous sections, to cater to the information needs of the Indian farmer. User-friendly systems, particularly with content in local languages, can generate interest among farmers and others. This places a premium on user friendliness: a major part of the target population is not comfortable with computers, so it may be useful to consider touch-screen technologies to improve user comfort levels. It is often observed that touch-screen kiosks, with their intuitive approach, provide a means for quick learning and higher participation. It is also necessary to provide as much content as possible in local languages.

Once the required application packages and databases are in place, the next challenge is the dissemination of the information. The Krishi Vigyan Kendras, NGOs and cooperative societies may be used to set up information kiosks, and private enterprise also needs to be drawn into these activities. These kiosks should provide information on other areas of interest such as education, and on matters for which people otherwise have to travel long distances, such as those related to the government, courts, etc. Facilities for email, raising queries to experts, and uploading digital clips to draw the attention of experts to location-specific problems can be envisaged. The Internet can make these services available to all parts of the country.

The task of creating application packages and databases to cater to the complete spectrum of Indian agriculture is a giant one. The Long Term Agriculture Policy provides an exhaustive list of all the areas to be covered; this can be taken as a guiding list to design and develop suitable systems catering to each of the specified areas, and it will also facilitate modularisation of the task. Our country has the advantage of having a large number of specialised institutions in place catering to various aspects of Indian agriculture. These institutions can play a crucial role in designing the necessary applications, databases and services, which will give better control and help in achieving quick results. As it is, several institutions have already developed systems related to their areas of specialisation. For quick results, it may also be useful to outsource the applications to software companies in India; this will facilitate quick deployment of applications and provide a boost to the software industry in India. In order to avoid duplication of effort, it is worth promoting a coordinating agency with an advisory role in evolving standard user interfaces, broad design, and monitoring of progress.
In the post-WTO regime, it would be useful to focus on selected agricultural products where India can maintain a clear competitive advantage in exports. This will
call for urgent measures to introduce state of the art technologies such as remote sensing,
geographical information systems (GIS), bio-engineering, etc. India has made rapid strides in
satellite technologies. It is possible to effectively monitor agricultural performance using remote
sensing and GIS applications. This will not only help in planning, advising and monitoring the
status of the crops but also will help in responding quickly to crop stress conditions and natural
calamities. Challenges of crop stress, soil problems, natural disasters can be tackled effectively
through these technologies. A beginning in precision farming can be encouraged in larger tracts of land, tilting export potential in our country's favour.
Transmission Media
A transmission medium (plural transmission media) is a material substance (solid, liquid, gas, or plasma) that can propagate energy waves. For example, the transmission medium for sound received by the ears is usually air, but solids and liquids may also act as transmission media for sound.
Wireless media may carry surface waves or skywaves, either longitudinally or transversely, and are so classified.
The absence of a material medium in vacuum may also constitute a transmission medium for electromagnetic waves such as light and radio waves. While material substance is not required for electromagnetic waves to propagate, such waves are usually affected by the transmission media they pass through, for instance by absorption or by reflection or refraction at the interfaces between media.
The term transmission medium also refers to a technical device that employs the material substance to transmit or guide waves. Thus, an optical fiber or a copper cable is a transmission medium. A transmission medium can be classified as a:
- Linear medium, if different waves at any particular point in the medium can be superposed;
- Bounded medium, if it is finite in extent, otherwise unbounded medium;
- Uniform medium or homogeneous medium, if its physical properties are unchanged at different points;
- Isotropic medium, if its physical properties are the same in different directions.
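The "linear medium" entry above can be made precise with a short statement in standard wave-equation notation (a sketch added for illustration, not from the original text): linearity means solutions may be superposed.

```latex
% Linearity: if u_1 and u_2 each satisfy the wave equation,
% any weighted sum (a superposition) is also a solution.
\[
  \frac{\partial^2 u_i}{\partial t^2} = c^2 \nabla^2 u_i \quad (i = 1, 2)
  \quad\Longrightarrow\quad
  u = \alpha u_1 + \beta u_2 \ \text{satisfies}\
  \frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u .
\]
```

This is why, at any point in a linear medium, two crossing waves simply add without distorting one another.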
Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides. It may also pass through any physical material that is transparent to the specific wavelength, such as water, air, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium, and so can travel through the "vacuum" of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions.
Telecommunications
Many transmission media are used as communications channels.
For telecommunications purposes in the United States, Federal Standard 1037C classifies transmission media as one of the following:
- Guided (or bounded)—waves are guided along a solid medium such as a transmission line.
- Wireless (or unguided)—transmission and reception are achieved by means of an antenna.
In both cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path; examples of guided media include phone lines, twisted pair cables, coaxial cables, and optical fibers. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path it takes. Examples of this include microwave, radio or infrared. Unguided media provide a means for transmitting electromagnetic waves but do not guide them; examples are propagation through air, vacuum and seawater.
The term direct link is used to refer to the transmission path between two devices in which signals propagate directly from transmitters to receivers with no intermediate devices, other than amplifiers or repeaters used to increase signal strength. This term can apply to both guided and unguided media.
A transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction; one station is a transmitter and the other is the receiver. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously; in this case, the medium carries signals in both directions at the same time.
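The three transmission modes can be sketched as channel policies in a short example (the `Channel` class and station names are illustrative assumptions, not a real networking API):

```python
# Toy model of simplex / half-duplex / full-duplex links.
# Channel and station names are invented for illustration.

class Channel:
    def __init__(self, mode):
        assert mode in ("simplex", "half-duplex", "full-duplex")
        self.mode = mode

    def can_transmit(self, station, active_senders):
        """Return True if `station` may transmit while the stations in
        `active_senders` (a set of names) are already transmitting."""
        if self.mode == "simplex":
            # Only the designated transmitter "A" ever sends.
            return station == "A"
        if self.mode == "half-duplex":
            # Either station may send, but only one at a time.
            return len(active_senders - {station}) == 0
        # Full duplex: both directions may be active simultaneously.
        return True

link = Channel("half-duplex")
print(link.can_transmit("A", set()))        # medium idle: allowed
print(link.can_transmit("B", {"A"}))        # A already sending: blocked
print(Channel("full-duplex").can_transmit("B", {"A"}))  # allowed
```

The half-duplex rule is the one enforced by shared media such as classic Ethernet with hubs; full duplex requires a dedicated path in each direction.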
Data storage device
A data storage device is a device for recording (storing) information (data). Recording can be done using virtually any form of energy, spanning from manual muscle power in handwriting, to acoustic vibrations in phonographic recording, to electromagnetic energy modulating magnetic tape and optical discs.
Electronic data storage is storage which requires electrical power to store and retrieve data. Most storage devices that do not require vision and a brain to read data fall into this category. Electromagnetic data may be stored in either an analog or digital format on a variety of media. Such data is considered electronically encoded, whether or not it is stored in a semiconductor device, since a semiconductor device is used to record it on its medium. Most electronically processed data storage media (including some forms of computer data storage) are considered permanent (non-volatile) storage; that is, the data will remain stored when power is removed from the device. In contrast, most electronically stored information within semiconductor microcircuits (computer chips) is volatile memory: it vanishes if power is removed.
With the exception of barcodes and OCR data, electronic data storage is easier to revise and may be more cost effective than alternative methods due to smaller physical space requirements and the ease of replacing (rewriting) data on the same medium. However, the durability of methods such as printed data is still superior to that of most electronic storage media. The durability limitations may be overcome with the ease of duplicating (backing-up) electronic data.
Hard disk drive
Interior of a hard disk drive

Date invented | Invented by
---|---
24 December 1954 | An IBM team led by Rey Johnson
A hard disk drive (HDD; also hard drive or hard disk) is a non-volatile, random access digital data storage device. It features rotating rigid platters on a motor-driven spindle within a protective enclosure. Data is magnetically read from and written to the platter by read/write heads that float on a film of air above the platters.
Introduced by IBM in 1956, hard disk drives have decreased in cost and physical size over the years while dramatically increasing in capacity. Hard disk drives have been the dominant device for secondary storage of data in general purpose computers since the early 1960s. They have maintained this position because advances in their recording density have kept pace with the requirements for secondary storage.[3] Today's HDDs operate on high-speed serial interfaces; i.e., serial ATA (SATA) or serial attached SCSI (SAS).
Memory card
A memory card or flash card is an electronic flash memory data storage device used for storing digital information. They are commonly used in many electronic devices, including digital cameras, mobile phones, laptop computers, MP3 players, and video game consoles. They are small, re-recordable, and able to retain data without power.
USB flash drive
USB flash drives are often used for the same purposes for which floppy disks or CD-ROMs were used. They are smaller, faster, have thousands of times more capacity, and are more durable and reliable because they have no moving parts. Until approximately 2005, most desktop and laptop computers were supplied with floppy disk drives; since then, floppy disk drives have been abandoned in favor of USB ports.
USB flash drives use the USB mass storage standard, supported natively by modern operating systems such as Linux, Mac OS X, Windows, and other Unix-like systems. USB drives with USB 2.0 support can store more data and transfer it faster than much larger optical disc drives like CD-RW or DVD-RW drives, and can be read by many other systems such as the Xbox 360, PlayStation 3, DVD players and some mobile smartphones.
Nothing moves mechanically in a flash drive; the term drive persists because computers read and write flash-drive data using the same system commands as for a mechanical disk drive, with the storage appearing to the computer operating system and user interface as just another drive. Flash drives are very robust mechanically.
A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case which can be carried in a pocket or on a key chain, for example. The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing plugging into a port on a personal computer, but drives for other interfaces also exist.
USB flash drives draw power from the computer via external USB connection. Some devices combine the functionality of a digital audio player with USB flash storage; they require a battery only when used to play music.
Name | Acronym | Form factor | DRM |
---|---|---|---|
PC Card | PCMCIA | 85.6 × 54 × 3.3 mm | No |
CompactFlash I | CF-I | 43 × 36 × 3.3 mm | No |
CompactFlash II | CF-II | 43 × 36 × 5.5 mm | No |
SmartMedia | SM / SMC | 45 × 37 × 0.76 mm | No |
Memory Stick | MS | 50.0 × 21.5 × 2.8 mm | MagicGate |
Memory Stick Duo | MSD | 31.0 × 20.0 × 1.6 mm | MagicGate |
Memory Stick PRO Duo | MSPD | 31.0 × 20.0 × 1.6 mm | MagicGate |
Memory Stick PRO-HG Duo | MSPDX | 31.0 × 20.0 × 1.6 mm | MagicGate |
Memory Stick Micro M2 | M2 | 15.0 × 12.5 × 1.2 mm | MagicGate |
Miniature Card | | 37 × 45 × 3.5 mm | No |
Multimedia Card | MMC | 32 × 24 × 1.5 mm | No |
Reduced Size Multimedia Card | RS-MMC | 16 × 24 × 1.5 mm | No |
MMCmicro Card | MMCmicro | 12 × 14 × 1.1 mm | No |
Secure Digital card | SD | 32 × 24 × 2.1 mm | CPRM |
SxS | SxS | Unknown | |
Universal Flash Storage | UFS | Unknown | |
miniSD card | miniSD | 21.5 × 20 × 1.4 mm | CPRM |
microSD card | microSD | 15 × 11 × 0.7 mm | CPRM |
xD-Picture Card | xD | 20 × 25 × 1.7 mm | No |
Intelligent Stick | iStick | 24 × 18 × 2.8 mm | No |
Serial Flash Module | SFM | 45 × 15 mm | No |
µ card | µcard | 32 × 24 × 1 mm | Unknown |
NT Card | NT NT+ | 44 × 24 × 2.5 mm | No |
Computer networking device
Computer networking devices are units that mediate data in a computer network. Computer networking devices are also called network equipment, Intermediate Systems (IS) or InterWorking Units (IWU). Units that are the final receivers of data, or that generate data, are called hosts or data terminal equipment.
List of computer networking devices
Common basic networking devices:
- Router: a specialized network device that determines the next network point to which it can forward a data packet towards the destination of the packet. Unlike a gateway, it cannot interface different protocols. Works on OSI layer 3.
- Bridge: a device that connects multiple network segments along the data link layer. Works on OSI layer 2.
- Switch: a device that allocates traffic from one network segment to certain lines (intended destination(s)) which connect the segment to another network segment. So unlike a hub a switch splits the network traffic and sends it to different destinations rather than to all systems on the network. Works on OSI layer 2.
- Hub: connects multiple Ethernet segments together, making them act as a single segment. When using a hub, every attached device shares the same bandwidth, in contrast to switches, which provide a dedicated connection between individual nodes. Works on OSI layer 1.
- Repeater: device to amplify or regenerate digital signals received while sending them from one part of a network into another. Works on OSI layer 1.
- Multilayer Switch: a switch which, in addition to switching on OSI layer 2, provides functionality at higher protocol layers.
- Protocol Converter: a hardware device that converts between two different types of transmissions, such as asynchronous and synchronous transmissions.
- Bridge Router (B router): combines router and bridge functionality and therefore works on OSI layers 2 and 3.
- Proxy: computer network service which allows clients to make indirect network connections to other network services
- Firewall: a piece of hardware or software put on the network to prevent some communications forbidden by the network policy
- Network Address Translator: a network service, provided as hardware or software, that converts internal network addresses to external ones and vice versa
- Multiplexer: device that combines several electrical signals into a single signal
- Network Card: a piece of computer hardware to allow the attached computer to communicate by network
- Modem: device that modulates an analog "carrier" signal (such as sound) to encode digital information, and demodulates such a carrier signal to decode the transmitted information, as when a computer communicates with another computer over the telephone network
- ISDN terminal adapter (TA): a specialized gateway for ISDN
- Line Driver: a device to increase transmission distance by amplifying the signal; used in base-band networks only
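The contrast drawn in the list above between a hub (layer 1, repeats every frame) and a learning switch (layer 2, forwards by learned MAC address) can be sketched in a few lines; the frame representation and class names here are simplified assumptions, not a real protocol stack:

```python
# A hub floods every frame; a learning switch remembers which port each
# source address was seen on and forwards unicast once it has learned.

def hub_forward(in_port, ports):
    """A hub repeats the frame on every port except the ingress port."""
    return [p for p in ports if p != in_port]

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port

    def forward(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source address lives on.
        self.mac_table[src_mac] = in_port
        # Forward: unicast if the destination is known, else flood.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch([1, 2, 3, 4])
print(sw.forward(1, "aa:aa", "bb:bb"))  # dst unknown -> flood: [2, 3, 4]
print(sw.forward(2, "bb:bb", "aa:aa"))  # aa:aa learned on port 1 -> [1]
print(hub_forward(1, [1, 2, 3, 4]))     # hub always floods: [2, 3, 4]
```

This is why the text says a switch "splits the network traffic": after the learning phase, frames travel only on the link that leads to their destination, while a hub keeps every station in one shared segment.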
Computer software
Computer software, or just software, is a collection of computer programs and related data that provide the instructions telling a computer what to do and how to do it. In other words, software is a set of computer programs, procedures, algorithms and associated documentation concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined in contrast to the older term hardware (meaning physical devices); unlike hardware, software is intangible, meaning it "cannot be touched". Software is also sometimes used in a narrower sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.
Examples of computer software include:
- Application software includes end-user applications of computers such as word processors or video games, and ERP software for groups of users.
- Middleware controls and co-ordinates distributed systems.
- Programming languages define the syntax and semantics of computer programs. For example, many mature banking applications were written in the COBOL language, originally invented in 1959. Newer applications are often written in more modern programming languages.
- System software includes operating systems, which govern computing resources. Today large applications running on remote machines such as Websites are considered to be system software, because the end-user interface is generally through a graphical user interface, such as a web browser.
- Testware is software for testing hardware or a software package.
- Firmware is low-level software often stored on electrically programmable memory devices. Firmware is given its name because it is treated like hardware and run ("executed") by other software programs.
- Shrinkware is the older name given to consumer-purchased software, because it was often sold in retail stores in a shrink-wrapped box.
- Device drivers control parts of computers such as disk drives, printers, CD drives, or computer monitors.
- Programming tools help conduct computing tasks in any category listed above. For programmers, these could be tools for debugging or reverse engineering older legacy systems in order to check source code compatibility.
Types of software
Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary, and often blurred.
1. System software
System software provides the basic functions for computer usage and helps run the computer hardware and system. It includes a combination of the following:
- Device drivers
- Operating systems
- Servers
- Utilities
- Window systems
2. Programming software
Programming software usually provides tools to assist a programmer in writing computer programs and software in different programming languages in a more convenient way. The tools include:
- Compilers
- Debuggers
- Interpreters
- Linkers
- Text editors
3. Application software
Application software is developed to aid in any task that benefits from computation. It is a broad category, and encompasses software of many kinds, including the internet browser being used to display this page. This category includes:
- Business software
- Computer-aided design
- Databases
- Decision making software
- Educational software
- Image editing
- Industrial automation
- Mathematical software
- Medical software
- Molecular modeling software
- Quantum chemistry and solid state physics software
- Simulation software
- Spreadsheets
- Telecommunications (i.e., the Internet and everything that flows on it)
- Video editing software
- Video games
- Word processing
Microprocessors
In the 1970s, the fundamental inventions by Federico Faggin (silicon-gate MOS ICs with self-aligned gates, along with his new random-logic design methodology) significantly affected the design and implementation of CPUs. Since the introduction of the first commercially available microprocessor (the Intel 4004) in 1971 and the first widely used microprocessor (the Intel 8080) in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual vast success of the now ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs can be combined in a single processing chip.
Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size as a result of being implemented on a single die means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, as the ability to construct exceedingly small transistors on an IC has increased, the complexity and number of transistors in a single CPU has increased dramatically. This widely observed trend is described by Moore's law, which has proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity to date.
While the complexity, size, construction, and general form of CPUs have changed drastically over the past sixty years, it is notable that the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As the aforementioned Moore's law continues to hold true, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model.
Operation
The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.
The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The location in program memory is determined by a program counter (PC), which stores a number that identifies the current position in the program. After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below).
The instruction that the CPU fetches from memory is used to determine what the CPU is to do. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. The way in which the numerical instruction value is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of numbers in the instruction, called the opcode, indicates which operation to perform. The remaining parts of the number usually provide information required for that instruction, such as operands for an addition operation. Such operands may be given as a constant value (called an immediate value), or as a place to locate a value: a register or a memory address, as determined by some addressing mode. In older designs the portions of the CPU responsible for instruction decoding were unchangeable hardware devices. However, in more abstract and complicated CPUs and ISAs, a microprogram is often used to assist in translating instructions into various configuration signals for the CPU. This microprogram is sometimes rewritable so that it can be modified to change the way the CPU decodes instructions even after it has been manufactured.
After the fetch and decode steps, the execute step is performed. During this step, various portions of the CPU are connected so they can perform the desired operation. If, for instance, an addition operation was requested, the arithmetic logic unit (ALU) will be connected to a set of inputs and a set of outputs. The inputs provide the numbers to be added, and the outputs will contain the final sum. The ALU contains the circuitry to perform simple arithmetic and logical operations on the inputs (like addition and bitwise operations). If the addition operation produces a result too large for the CPU to handle, an arithmetic overflow flag in a flags register may also be set.
The final step, writeback, simply "writes back" the results of the execute step to some form of memory. Very often the results are written to some internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but cheaper and larger, main memory. Some types of instructions manipulate the program counter rather than directly produce result data. These are generally called "jumps" and facilitate behavior like loops, conditional program execution (through the use of a conditional jump), and functions in programs. Many instructions will also change the state of digits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, one type of "compare" instruction considers two values and sets a number in the flags register according to which one is greater. This flag could then be used by a later jump instruction to determine program flow.
After the execution of the instruction and writeback of the resulting data, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If the completed instruction was a jump, the program counter will be modified to contain the address of the instruction that was jumped to, and program execution continues normally. In more complex CPUs than the one described here, multiple instructions can be fetched, decoded, and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which in fact is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline.
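The fetch-decode-execute-writeback cycle described above can be illustrated with a toy stored-program machine. The instruction set below is invented purely for illustration (it is not a real ISA), but the loop structure mirrors the four steps, including a program counter that is incremented on fetch and overwritten by jumps:

```python
# Minimal fetch-decode-execute-writeback loop for a toy CPU.
# Instructions are tuples: invented opcodes LOAD, ADD, JNZ, HALT.

def run(program):
    regs = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}
    pc = 0                                # program counter
    while True:
        # Fetch: read the instruction at the PC, then advance the PC.
        instr = program[pc]
        pc += 1
        # Decode: split the instruction into opcode and operands.
        opcode, *ops = instr
        # Execute and write back the result.
        if opcode == "LOAD":              # LOAD immediate -> register
            value, dest = ops
            regs[dest] = value
        elif opcode == "ADD":             # ADD src1, src2 -> dest
            a, b, dest = ops
            regs[dest] = regs[a] + regs[b]
        elif opcode == "JNZ":             # jump to target if reg != 0
            reg, target = ops
            if regs[reg] != 0:
                pc = target               # jump: overwrite the PC
        elif opcode == "HALT":
            return regs                   # stop, return register file

# Sum 5 three times by looping: r0 accumulates, r2 counts down.
program = [
    ("LOAD", 0, "r0"),          # 0: accumulator = 0
    ("LOAD", 5, "r1"),          # 1: addend = 5
    ("LOAD", 3, "r2"),          # 2: loop counter = 3
    ("LOAD", -1, "r3"),         # 3: constant -1 for decrementing
    ("ADD", "r0", "r1", "r0"),  # 4: r0 += r1
    ("ADD", "r2", "r3", "r2"),  # 5: r2 -= 1
    ("JNZ", "r2", 4),           # 6: loop back while counter != 0
    ("HALT",),                  # 7: stop
]
print(run(program)["r0"])       # 15 (three additions of 5)
```

The JNZ instruction plays the role of the conditional jumps described above: instead of producing data, it manipulates the program counter, which is how loops and conditional execution arise from the plain sequential cycle.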
Central processing unit
The central processing unit (CPU) is the portion of a computer system that carries out the instructions of a computer program, to perform the basic arithmetical, logical, and input/output operations of the system. The CPU plays a role somewhat analogous to the brain in the computer. The term has been in use in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains much the same.
On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor. Since the 1970s, the microprocessor class of CPUs has almost completely overtaken all other CPU implementations. Modern CPUs are large-scale integrated circuits in small, rectangular packages, with multiple connecting pins.
Two typical components of a CPU are the arithmetic logic unit (ALU), which performs arithmetic and logical operations, and the control unit (CU), which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
Not all computational systems rely on a central processing unit. An array processor or vector processor has multiple parallel computing elements, with no one unit considered the "center". In the distributed computing model, problems are solved by a distributed interconnected set of processors.
Computers such as the ENIAC had to be physically rewired in order to perform different tasks, which caused these machines to be called "fixed-program computers." Since the term "CPU" is generally defined as a software (computer program) execution device, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer.
The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949.[2] EDVAC was designed to perform a certain number of instructions (or operations) of various types. These instructions could be combined to create useful programs for the EDVAC to run. Significantly, the programs written for EDVAC were stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program, or software, that EDVAC ran could be changed simply by changing the contents of the memory.
Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for many purposes. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones and children's toys.
While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but elements of the Harvard architecture are commonly seen as well.
Electrical relays and vacuum tubes (thermionic valves) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely.[1] In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs (see below for a discussion of clock rate). Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with.
Output Devices
An output device is any piece of computer hardware equipment used to communicate the results of data processing carried out by an information processing system (such as a computer) to the outside world.
In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer), and the outside world. Inputs are the signals or data sent to the system, and outputs are the signals or data sent by the system to the outside.
Examples of output devices:
- Speakers
- Headphones
- Screen (Monitor)
- Printer
1. Speakers
Computer speakers, or multimedia speakers, are speakers external to a computer that usually disable the lower-fidelity built-in speaker. They often have a low-power internal amplifier. The standard audio connection is a 3.5 mm (approximately 1/8 inch) stereo jack plug, often color-coded lime green (following the PC 99 standard) for computer sound cards. A few use an RCA connector for input instead: a plug and socket for a two-wire (signal and ground) coaxial cable widely used to connect analog audio and video components, with a prong 1/8" thick by 5/16" long; rows of RCA sockets are found on the backs of stereo amplifiers and numerous A/V products. There are also USB speakers, which are powered from the 5 volts at 500 milliamps provided by the USB port, allowing about 2.5 watts of output power.
Computer speakers range widely in quality and in price. The computer speakers typically packaged with computer systems are small, plastic, and have mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls.
The internal amplifiers require an external power source, usually an AC adapter. More sophisticated computer speakers can have a subwoofer unit, to enhance bass output, and these units usually include the power amplifiers both for the bass speaker, and the small satellite speakers.
Some computer displays have rather basic speakers built-in. Laptops come with integrated speakers. Restricted space available in laptops means these speakers usually produce low-quality sound.
For some users, a lead connecting computer sound output to an existing stereo system is practical. This normally yields much better results than small low-cost computer speakers. Computer speakers can also serve as an economy amplifier for MP3 player use for those who do not wish to use headphones, although some models of computer speakers have headphone jacks of their own.
2. Headphones
Headphones are a pair of small loudspeakers, or less commonly a single speaker, held close to a user's ears and connected to a signal source such as an audio amplifier, radio, CD player or portable media player. They are also known as stereophones, headsets or, colloquially, cans. The in-ear versions are known as earphones or earbuds. In the context of telecommunication, the term headset is used to describe a combination of headphone and microphone used for two-way communication, for example with a telephone.
Headphones may be used both with fixed equipment such as CD or DVD players, home theater and personal computers, and with portable devices (e.g. digital audio player/mp3 player, mobile phone, etc.). Cordless headphones are not connected via a wire; instead they receive a radio or infrared signal encoded using a transmission link such as FM, Bluetooth or Wi-Fi. They are actually powered receiver systems, of which the headphone is only a component. These types of cordless headphones are being used more frequently at events such as a silent disco or silent gig.
In the professional audio sector headphones are used in live situations by disc jockeys with a DJ mixer and by sound engineers for monitoring signal sources. In radio studios, DJs use a pair of headphones when talking into the microphone while the speakers are turned off, to eliminate acoustic feedback and monitor their own voice. In studio recordings, musicians and singers use headphones to play along to a backing track. In the military, audio signals of many varieties are monitored using headphones.
Wired headphones are attached to an audio source. The most common connection standards are 6.35mm (¼″) and 3.5mm TRS connectors and sockets; the larger 6.35mm connector tends to be found on fixed-location home or professional equipment. Sony introduced the smaller, and now widely used, 3.5mm "minijack" stereo connector in 1979, adapting the older monophonic 3.5mm connector for use with its Walkman portable stereo tape player, and the 3.5mm connector remains the common connector for portable applications today. Adapters are available for converting between 6.35mm and 3.5mm devices.
3. Computer monitor
A monitor or display (sometimes called a visual display unit) is an electronic visual display for computers. The monitor comprises the display device, circuitry, and an enclosure. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) thin panel, while older monitors use a cathode ray tube about as deep as the screen size.
The first computer monitors used cathode ray tubes (CRTs), which were the dominant technology until they were replaced by LCD monitors in the 21st century.
Originally, computer monitors were used for data processing while television receivers were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions, and then computer monitors, has also changed from 4:3 to 16:9.
Technologies
Different image techniques have been used for computer monitors. Until the 21st century most monitors used CRTs, but these have been phased out in favor of LCD monitors.
(a) Cathode ray tube
The first computer monitors used cathode ray tubes (CRT). Until the early 1980s, they were known as video display terminals and were physically attached to the computer and keyboard. The monitors were monochrome, flickered and the image quality was poor. In 1981, IBM invented the Color Graphics Adapter, which could display four colors with a resolution of 320 by 200 pixels. They introduced the Enhanced Graphics Adapter in 1984, which was capable of producing 16 colors and had a resolution of 640 by 350.
CRT remained the standard for computer monitors through the 1990s. CRT technology remained dominant in the PC monitor market into the new millennium partly because it was cheaper to produce and offered viewing angles close to 180 degrees.
(b) Liquid crystal display
TFT-LCD is a variant of liquid crystal display (LCD) and is now the dominant technology used for computer monitors.
The first standalone LCD displays appeared in the mid-1990s, selling for high prices. As prices declined over a period of years they became more popular. During the 2000s TFT LCDs gradually displaced CRTs, eventually becoming the primary technology used for computer monitors. The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter. The now common active matrix TFT-LCD technology also has less flickering than CRTs, which reduces eye strain.
(c) Organic light-emitting diode
Organic light-emitting diode (OLED) monitors provide higher contrast and better viewing angles than LCDs, and are predicted to replace them. In 2011 a 25-inch OLED monitor cost $6,000, but prices are expected to drop.
4. Printers
In computing, a printer is a peripheral which produces text and/or graphics of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces, typically wireless and/or Ethernet based, and can serve as a hard copy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time. In addition, a few modern printers can directly interface to electronic media such as memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with scanners and/or fax machines in a single unit, and can function as photocopiers.
Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers. Most MFPs include printing, scanning, and copying among their many features.
Consumer and some commercial printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many inexpensive consumer printers are far slower than that), and the cost per page is actually relatively high. However, this is offset by the on-demand convenience and project management costs being more controllable compared to an outsourced solution. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. Local printers are also increasingly taking over the process of photofinishing as digital photo printers become commonplace. The world's first computer printer was a 19th century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.
A virtual printer is a piece of computer software whose user interface and API resemble that of a printer driver, but which is not connected with a physical computer printer.