Interestingly, the science of electronics and the science of electricity both deal with electric current, but with a thin line of distinction between them. The science of electricity treats current mainly as a form of energy, an energy that can operate lights, motors and other equipment, whereas the science of electronics treats current as a means of carrying or transmitting information. Hence, the branch of physics that studies electric current as a means of carrying information from one place to another is called electronics. A current that carries information is called a signal. Signals may represent sounds, pictures, numbers or letters, and it is the ability built into a particular electronic device that shapes the behaviour of an electric current into the corresponding signal form. An electric current flowing steadily and in an unchanging form does carry energy, but converting or varying that energy so that it serves as a signal requires a device to do the job. An electronic device does exactly that, shaping the behaviour of a current into either an analog or a digital signal, depending on the practical purpose of the device. Broadly, therefore, the signals handled by an electronic device may be classified into two categories: digital and analog signals.
A digital signal is like an ordinary electric switch: it is either on or off. An analog signal, on the other hand, can have any value within a certain range.
Analog signals are widely used to represent sounds and pictures because light levels and the frequencies of sound waves can have any value within a given range. Analog signals can be converted into digital signals and vice versa. For example, compact disc players (CD-players) convert digital sound signals on discs to analog signals for playback through loudspeakers.
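As an illustration of this conversion (not drawn from the text), the short Python sketch below samples a smoothly varying "analog" value and quantizes it into one of a small number of discrete digital levels, which is essentially what an analog-to-digital converter does; the function name, bit depth and sample values are purely illustrative.

```python
import math

def quantize(value, bits=3, v_min=-1.0, v_max=1.0):
    """Map an analog value onto one of 2**bits discrete digital levels."""
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)
    code = round((value - v_min) / step)          # nearest digital code
    return code, v_min + code * step              # the code and its analog equivalent

# Sample an "analog" 1 Hz sine wave 8 times per second and digitize each sample.
for n in range(8):
    analog = math.sin(2 * math.pi * n / 8)        # any value in [-1, 1]
    code, approx = quantize(analog)
    print(f"sample {n}: analog={analog:+.3f} -> code={code} -> approx={approx:+.3f}")
```

Running the sketch in reverse order of ideas, converting the stored codes back into voltage levels, is the digital-to-analog step a CD player performs before the loudspeaker.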
It is interesting to note that this ability of an electronic device to convert an electric current into the required signal forms comes from its being made of specialized materials called semiconductors. These semiconductors make up a structure that can be described as the basic structural and functional unit of any electronic device: the transistor. This is analogous to the living cell, which constitutes the structural and functional unit of the human body. Just as billions of individual cells compose the complete human body, a complete electronic device is made up of anywhere from a hundred thousand to millions of such transistors.
These transistors still operate millions of stereos, radios and television sets. But engineers can now put more than a hundred thousand such transistors on a single chip of silicon even smaller than a fingernail. Such a chip forms an integrated circuit (IC). Chips of this type can be wired together on circuit boards to produce electronic equipment that is not only much smaller in size and less expensive, but also far more powerful, than ever before.
Today, electronic devices are commonly used in a large number of applications that formerly relied on mechanical or electric systems for their operation. Examples range from electronic controls in automatic cameras and electronic ignition systems in cars to electronic controls in domestic appliances such as washing machines.
A transistor makes use of a specialized material without which the science of electronics could not have come into being: the semiconductor. Semiconductors are made of either silicon or germanium. Since semiconductors have made the IT revolution possible, any place that becomes an epicentre of the IT industry anywhere in the world today is generally known by the epithet "Silicon Valley". Whether we talk of the IT industry or the electronics industry, it is essentially semiconductors that lie at the base of it all.

It was in the early 1940s that a team of three American physicists, John Bardeen, Walter H. Brattain and William Shockley, succeeded in making the first semiconductor diode. Later, in 1947, the same trio achieved a breakthrough in electronics by inventing the first transistor. By the early 1950s, manufacturers had begun using transistors as amplifiers in devices such as hearing aids and pocket-sized radios. By the 1960s, semiconductor diodes and transistors had almost replaced the hitherto-used vacuum tubes in most electronic gadgets. What followed was the birth of microelectronics, which saw a far-reduced size of gadgets owing to the evolution of integrated circuits (ICs) and their incorporation into the day's electronic devices and equipment. The period in the evolutionary history of electronics that saw the emergence and use of semiconductor materials in the construction of semiconductor diodes and the like is described as the solid-state era of electronics.

Prior to this was the vacuum tube era, which began in 1904 when a British scientist, J.A. Fleming, built the first vacuum tube to be used commercially. It was a twin-electrode tube, a so-called diode tube, that could detect radio signals. Later, in 1907, the US scientist Lee De Forest went on to invent and patent a three-electrode, or triode, tube that became the first electronic amplifier, its principal first application being in long-distance telephone lines. The triode tube as an oscillator was the joint invention of Lee De Forest and the American radio pioneer Edwin Armstrong in 1912-1913. With a vacuum tube usable as an amplifier and oscillator already in place, radio broadcasting in the US got off to a scintillating start in 1920, a date that also marked the beginning of the electronics industry. The development of vacuum tubes and their use as amplifiers and oscillators between the 1920s and the 1950s made possible inventions such as television, electronic computers, radar and films with sound. The culmination of the vacuum tube era came when the first general-purpose electronic computer, ENIAC (Electronic Numerical Integrator and Computer), was built in 1946. ENIAC had over 18,000 vacuum tubes in it, which made it a huge and awkward-looking machine, but it was also over 1,000 times faster than the fastest non-electronic computer then in use.
Connecting concepts: What are semiconductors? In terms of its conductivity, any material can be either a conductor or an insulator. But there are some substances that are neither good conductors nor complete insulators; their conductivity lies somewhere in between the two. Such substances or materials are referred to as semiconductors. Interestingly, semiconductor materials are insulators if they are very pure, especially at low temperatures, but their conductivity can be greatly increased by adding tiny but controlled amounts of certain impurities to them, a process known as doping. They can then be used to make devices such as diodes, transistors and integrated circuits. Some of the most familiar examples of semiconductors are elements like silicon, germanium, selenium and carbon. Of these, the first two have to date remained the bulwark of the entire electronics industry. From the viewpoint of doping, we may obtain two types of semiconductor material, called n-type and p-type semiconductors. In n-type silicon, the silicon (Si) is doped with phosphorus (P) atoms, which increases the number of negative electrons that are free to move through the material. In p-type silicon, boron (B) atoms are used for doping. They create gaps, called positive "holes", in the material, and conduction occurs by electrons jumping from one hole to another. The effect is just as if positive holes were moving in the opposite direction, and for that reason we usually say that conduction in a p-type semiconductor is due to positive holes.
A diode can be made by doping a crystal of pure silicon (or germanium) so as to form a region of p-type material in contact with a region of n-type material, the boundary between them being called the junction. The connection to the p-side is the anode and that to the n-side is the cathode. If a p.d. is applied so that the p-type region is positive, the positive holes are pushed towards the junction, and making the n-type region negative has the same effect on the electrons. The positive holes then drift from the p-type to the n-type material across the junction. The diode conducts, has a low resistance and is said to be forward biased. If the p.d. is applied the other way round, i.e. the p-type region is negative and the n-type positive, the electrons and holes are attracted to opposite ends of the diode and away from the junction, so that there is no flow of charge across it. In this case the diode hardly conducts at all, exhibits a high resistance and is said to be reverse biased.
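The asymmetry between forward and reverse bias can be made concrete with the standard ideal-diode (Shockley) relation, which the passage itself does not state; the sketch below simply evaluates it at a few bias voltages, using assumed values for the saturation current and thermal voltage.

```python
import math

def diode_current(v_bias, i_sat=1e-12, v_thermal=0.025):
    """Ideal (Shockley) diode relation: I = I_s * (exp(V / V_T) - 1)."""
    return i_sat * (math.exp(v_bias / v_thermal) - 1)

# Forward bias (p-side positive): current grows rapidly, i.e. low resistance.
# Reverse bias (p-side negative): current saturates near -I_s, i.e. high resistance.
for v in (0.7, 0.6, 0.0, -0.6, -5.0):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):+.3e} A")
```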
In chemistry, these are described as members of the carbon family, also referred to as the group-14 elements; besides these three, group 14 includes tin and lead. Silicon is the second most abundant element (27.7% by mass) in the Earth's crust, and in nature it is present in the form of silica and silicates. In terms of economic significance, silicon has long been the backbone of the entire electronics industry, though it also finds use as a component of ceramics, glass and cement. In contrast, germanium exists in nature only in traces, which has made silicon the ubiquitous constituent of all IC chips today. Note that only the ultra-pure forms of germanium and silicon are used to make transistors and other semiconductor devices. All group-14 members are solids; silicon exists as a non-metal, while germanium is a typical metalloid. Silicon predominantly occurs in nature as silica and silicates, which together constitute almost 95% of the Earth's crust. Silica, chemically known as silicon dioxide, occurs in several crystallographic forms, of which quartz, cristobalite and tridymite are the most prominent crystalline forms; they are inter-convertible at suitable temperatures. Silicon dioxide is a three-dimensional crystalline network in which each silicon atom is bonded tetrahedrally to four oxygen atoms; each oxygen atom is in turn bonded to another silicon atom, so that each corner is shared with another tetrahedron. Quartz is noted for its extensive use as a piezoelectric material, which has made possible extremely accurate clocks, modern radio and TV broadcasting, and mobile radio communications. Similarly, silica gel is used as a well-known drying agent, while an amorphous form called kieselguhr is used in filtration plants. Among the naturally occurring silicates, the best-known example is the class of minerals called zeolites, chemically aluminosilicates. They have great commercial significance as catalysts in the petrochemical industry for the cracking of hydrocarbons. ZSM-5 is a typical example of a zeolite that is used to convert alcohols directly into gasoline. Moreover, hydrated zeolites are used as ion exchangers in the softening of hard water.
A "pollution" that has been a blessing for mankind: THE SOLID-STATE POLLUTION
Isn't it a paradox that a "pollution" of a kind has not only brought man creature comforts but also revolutionized the way he used to live?
It may seem so, insofar as the prima facie meaning of the word "pollution" is concerned. But in a practical sense it certainly happened this way, and the birth of the science of electronics, better called solid-state physics, became possible as an offshoot of physics during the first half of the 20th century. In the true sense of the term, solid-state physics is largely a study of crystalline solids: bodies or substances whose individual atoms or molecules are located regularly in three-dimensional space, forming a periodic array of points that repeats itself endlessly in a regular rhythm. These points, when joined, form a geometrical figure such as a cube that constitutes the structural unit of the crystalline solid and is called a "cubic unit cell". A cubic unit cell has its corners occupied by atoms or molecules, and this is how a crystal is generated, which in strict solid-state physics terms is nothing but a lattice of identical cubes endlessly repeated. Consider, for instance, a crystal of germanium, a brittle, whitish, metal-like element that acts as an almost perfect insulator in its pure and perfect state; the same is true of its relative, silicon. A question then arises: how have they come to be the material on which virtually the entire world of electronics is entrenched? The answer lies in their being "polluted" with another material so as to make them conduct. This polluting of a material like silicon or germanium is, in the parlance of solid-state physics, called doping, and the material with which it is doped is called an impurity. This is how the science of electronics turns a substance that is an insulator in its pure state into a semiconductor once it is doped with a foreign material such as arsenic, phosphorus or boron; the result is what we call a semiconductor, the stuff of which the universal transistor is made. This is what one researcher in the domain of electronics has wryly described as "solid-state pollution, the backbone of modern electronics."
Electronics has an all-important role in a country’s development process today. Electronics plays a catalytic role in enhancing production and productivity in key sectors of the economy, whether it relates to infrastructure, process industries, communication, or even manpower training. High-tech areas today depend heavily on electronics.
Electronics is conventionally classified into consumer, industrial, defence, communications and information processing sectors. In recent times, medical electronics and systems for transportation and power utilities have become important segments on their own.
Consumer electronics is the oldest sector of the field; it began with the development of radio receivers after the invention of the triode. International competitiveness in this field requires constant innovation. The field has expanded remarkably in the last few years with the development of items like compact disc (CD) players, digital audiotape, microwave ovens, washing machines, and satellite television reception systems. All these items, however, make use of advanced technologies and manufacturing techniques, such as semiconductor laser and microwave devices.
Industrial electronics is oriented towards manufacturing products required by modern industry – process control equipment, numerically controlled machinery, robots, and equipment for testing and measurement. Laboratories too require instruments of precision. This field has great potential for growth and development.
Advanced infrastructure in materials science and sophisticated electronics are both relevant to the defence field, where equipment must withstand environmental pressures besides being precise and sensitive. Defence electronics is strategic, of course; it also has valuable spin-offs to offer industry. Bharat Electronics Ltd. (BEL), a defence-funded organization, has contributed much to the development of the transistor and television in India.
Communications electronics is a rapidly growing field with much scope for innovation and industrial application. Communications equipment has benefited immensely from the development of efficient semiconductor lasers, optical fibre technology, digital techniques, and powerful microprocessors.
Information technology, again, is clearly dependent on electronics. The integrated circuit is the base of computers, which are in turn used for designing better very large scale integrated (VLSI) circuits, particularly for communication systems, while fast and efficient communications lead to distributed computer networks, giving one access to specialized data in a distant computer from one's own workplace.
In the medical field, electronics has made possible the ECG (Electrocardiogram) recorder as well as the NMR (Nuclear Magnetic Resonance) scanner besides other measurement equipment.
Although the transistor still remains the structural and functional unit of every electronic device, the most discernible development in electronics as the technology advanced has been the ever-increasing number of transistors in a device, together with the ever-shrinking area into which they are packed. This has boiled down to a remarkable achievement: today, engineers can pack over a hundred thousand transistors on a single small silicon chip that is even smaller than a fingernail. Such a silicon chip, with its hundreds of thousands of embedded transistors, forms a structure we call an integrated circuit (IC). Today any electronic device is essentially an assemblage of many such ICs wired together on flat plates to form what we call circuit boards. In a common PC, the component called the motherboard (MB) is nothing but a circuit board, and it essentially constitutes the mother of the PC, as it integrates all the other functional parts of the computer system.
In essence, the gist of the matter is that, except for the primitive era of electronics marked by the use of vacuum tubes, every subsequent generation of electronic devices has had the transistor at the heart of its construction. A phase of miniaturization then began in which these transistors came to be packed onto a single small silicon chip; this defines the current fourth generation of electronic devices, described as the VLSI integrated-circuit era, and it has not only given electronic gadgets extraordinary speed but considerably reduced their size as well. This is what was once prophesied by the famous Moore's law.
One can reasonably say that the course of human development is based on the evolution of the human capacity to manipulate objects in ever finer detail. This capability has given birth to the knowledge era, where the information of the world is available on hand-held computers. Rapid growth in computing requirements necessitated smaller and smaller transistors so that devices could shrink in size; this is popularly referred to as Moore's law, which may be stated as: "the number of transistors in an integrated circuit doubles every 18 months." This so-called law has roughly held throughout the history of integrated circuits. To double every 18 months, more transistors have to be packed into the same area, or the devices themselves have to shrink. To what size can they shrink further? Ultimately, to objects as small as a few atoms or molecules. Thus the technology of tomorrow needs objects to be manipulated at the atomic level, and the next-generation technology that will allow matter to be manipulated at the atomic scale, or even below it, is not far off but very much in the offing. It is called "nanotechnology".
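A rough back-of-the-envelope sketch of the doubling rule (the starting count and time spans are illustrative numbers, not taken from the text):

```python
def transistor_count(initial_count, years, doubling_period_years=1.5):
    """Project transistor count under Moore's law (doubling every 18 months)."""
    return initial_count * 2 ** (years / doubling_period_years)

# Example: starting from 100,000 transistors on a chip,
# projected counts after 3, 6 and 15 years.
for years in (3, 6, 15):
    print(f"after {years:2d} years: ~{transistor_count(100_000, years):,.0f} transistors")
```

Fifteen years corresponds to ten doublings, roughly a thousand-fold increase, which is why the shrinking of individual transistors cannot be postponed indefinitely.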
The birth of the quantum computer: As the science of electronics passed through successive and progressive stages of evolution since its inception, the electronic gadgets of which it is the life and blood went through similar stages alongside it. The result was the stage of miniaturization, in which the principal functioning and constitution of an electronic gadget remained the same, but it began to shrink in size. This miniaturization was made possible by the ability of electronic engineers to pack over a hundred thousand transistors on a single silicon chip with the help of a technique known as photolithography, and hence the construction of the so-called ICs. The same miniaturization happened very strikingly in the computer industry, considering that the first computer, ENIAC, was built by two US engineers, J.P. Eckert and J.W. Mauchly, in 1946. ENIAC, an acronym for Electronic Numerical Integrator And Computer, was a huge machine consisting of over 18,000 on-and-off switches called vacuum tubes. It weighed around 30 tons and easily gobbled up 150 kilowatts of power, but could hardly store 700 bits of information in its memory. Since the birth of ENIAC, computer technology has advanced greatly through increasing miniaturization of its basic switching unit. To begin with, this miniaturization was achieved by replacing the rather large vacuum tube with the ubiquitous transistor, and later with the silicon chip. Interestingly, even the silicon chip is not the last word in miniaturization, and the question logically arises: what next?
A big thanks here goes, at the outset, to the phenomenon called SUPERCONDUCTIVITY, discovered by scientists in certain materials that exhibit almost zero electrical resistance at temperatures close to absolute zero, far below the freezing point of water. It has made it possible to build an electronic device about a thousand times more compact, called the "cryotron". A fair idea of the relative compactness of computing devices, from the time the first computer was made up to the present era when one can think of using something like the cryotron to construct a computer, can be had from the fact that a cubic foot of space could barely house a few hundred vacuum tubes, while the same space can comfortably accommodate a few thousand transistors, several hundred thousand silicon chips and several million cryotrons. Incidentally, computer engineers are not content to stop at the cryotron; they are already thinking far ahead, so as to cross even the cryotron limit of miniaturization. The birth of quantum computers is very much next on their agenda, and this will probably be the last and ultimate stage of miniaturization that man could ever achieve in the world of electronics. The advent of quantum computing will owe its origin to the upcoming, high-end science of nanotechnology, which could make it possible to manipulate and engineer matter, both living and non-living, at the nanometer scale.
It may be noted that nanotechnology and nanoscience came into prominence in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). This led to the discovery of fullerenes in 1985 and of carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nano-crystals were studied, which led to a fast-increasing number of metal-oxide nano-particles, or "quantum dots", and hence the birth of so-called "quantum computers" became an imminent possibility. Later, in the 1990s, the invention of the atomic force microscope (AFM), coupled with the launch of the United States National Nanotechnology Initiative in 2000, gave the mission of nanotechnology a real shot in the arm for further development and expansion.
Connecting concepts: What is ‘superconductivity’?
Certain conductors do not resist the flow of electric current through them; they exhibit a phenomenon called "superconductivity". These conductors are called superconductors, and as such they are repelled by magnetic fields and hence exhibit what we call diamagnetism. It may, however, be noted that superconductivity is manifested by superconducting materials only below a certain critical temperature and a critical magnetic field, which vary from material to material. Before 1986, the highest critical temperature was 23.2 K (-249.8°C/-417.6°F), in niobium-germanium compounds. An expensive and somewhat inefficient coolant, liquid helium, was used to achieve such a low temperature. Later, it was found that some ceramic metal-oxide compounds containing rare-earth elements were superconductive at higher temperatures. In those instances, the less expensive and more efficient coolant liquid nitrogen could be used, at 77 K (-196°C/-321°F). In 1987, the composition of one of these superconducting compounds, with a critical temperature of 94 K (-179°C/-290°F), was revealed to be YBa2Cu3O7 (yttrium-barium-copper oxide). It has since been shown that rare-earth elements such as yttrium are not an essential constituent, for in 1988 a thallium-barium-calcium-copper oxide was discovered with a critical temperature of 125 K (-148°C/-234°F).
The term "nano" comes from the Greek word for "dwarf" and in scientific terminology refers to a nanometer (nm). One nanometer is a millionth of a millimeter, or one-billionth of a meter (10⁻⁹ m). A single human hair is around 80,000 nanometers in width.
The term "nanotechnology" traces its origins to 1959, when Nobel laureate physicist Richard Feynman outlined the idea, though it was Eric Drexler who actually worked with the minutest of particles, 1-100 nanometers in size. Nanoscience involves the manipulation of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. According to Professor Norio Taniguchi of Tokyo Science University, "nanotechnology mainly consists of the processing, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nanoscale phenomena and devices through speeches and books.
According to a study published on November 1, 2007, nanoscale computing is all set to usher in a new era of personal and industrial data storage. Professor Albert Fert, who won the Nobel Prize for Physics in October 2007 along with the German physicist Peter Gruenberg, has showcased the potential of a new generation of disk drives which promise to boost data storage by a factor of a hundred. Magnetic Random Access Memory (MRAM) could essentially collapse the disk drive and the computer chip into one, vastly expanding both processing power and storage capacity. MRAM potentially combines key advantages such as non-volatility, infinite endurance and fast random access, down to five-nanosecond read/write times, that make it a likely candidate for becoming the "universal memory" of nanoelectronics.
Researchers at Hewlett-Packard have shown that nanoscale circuit elements called memristors, previously built into memory devices, can perform full Boolean logic, like the logic in computer processors. Memristor logic devices are much smaller than devices made from transistors, enabling more computing power to be packed into a given space. Memristor arrays performing both logic and memory functions could in future eliminate the transfer of data between a processor and a hard drive.
Given in the box below is a quick glance at the evolutionary sequence of "Computeronics":
First Generation (mechanical or electromechanical):
Calculators: Difference Engine
Programmable devices: Analytical Engine
Second Generation (Vacuum tube):
Calculators: IBM 604, UNIVAC 60
Programmable devices: Colossus, ENIAC, EDSAC, EDVAC, UNIVAC 1, IBM 701/702/650/Z22.
Third Generation (discrete transistors and SSI, MSI, LSI Integrated circuits):
Mainframes: IBM 7090, System 360
Minicomputers: PDP-8, System 36
Fourth Generation (VLSI Integrated circuits):
Microcomputer: VAX, IBM System-1
4-bit: Intel 4004/4040
8-bit: Intel 8008/8080/Motorola 6800/Zilog Z80
16-bit: Intel 8088/Zilog Z8000
32-bit: Intel 80386/Pentium/Motorola 68000
64-bit: x86-64/Power PC/MIPS/SPARC
Embedded: Intel 8048, 8051
Personal Computer: Desktop, Laptop, SOHO, UMPC, Tablet PC, PDA.
Fifth Generation: Presently, experimental and theoretical computing and artificial intelligence are gaining ground. Examples include quantum computers, DNA computing and optical computers.
Universal Computers Characteristics: Calculation, Speed, Storage, Retrieval, Accuracy, Versatility, Automation, Diligence (no fatigue), File and Data Exchange, Networking, etc.
An algorithm is a repetitive routine which, if followed, is guaranteed to solve a problem, assuming, of course, that the problem has a solution. An example is the simple routine for searching for a word in a dictionary: one scans the list of words in alphabetical order till one finds it. The dictionary provides a set of possible solutions in some order, and we test each word in succession till the required one is found.
The algorithm of our illustration is typical of any problem-solving process that a computer can be set to. There is a generator that produces possible solutions (the word list in the above case) together with a test procedure for recognizing the solution. In this case the search is guaranteed to succeed in a time we can afford. The process is therefore amenable to computer operation, even though searching a word list involves handling purely non-numerical symbols, for the computer can manipulate them as readily as numbers. All that need be done is to programme the computer to follow the prescribed algorithm, or recursive routine, of scanning the possible solutions produced by the generator, testing them in succession and terminating the operation when the solution is in hand.
The computer nevertheless owes its success in accomplishing such computational or algorithmic tasks to its high-speed operation, so that, so long as some way of performing a task can be specified, it does not really matter how laborious the ensemble of operations would be if carried out by us as human beings.
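A minimal Python sketch of this generate-and-test routine, using a made-up word list; the loop over the ordered list plays the role of the generator, and the equality check is the test procedure.

```python
def find_word(word_list, target):
    """Generate-and-test: scan the alphabetically ordered list and
    test each candidate in succession until the target is found."""
    for index, candidate in enumerate(word_list):   # generator of possible solutions
        if candidate == target:                     # test procedure
            return index
    return -1                                       # list exhausted without success

dictionary = ["apple", "banana", "cherry", "grape", "mango", "orange"]
print(find_word(dictionary, "mango"))   # -> 4
print(find_word(dictionary, "papaya"))  # -> -1
```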
No doubt then that it is the lightning rapidity of a computer operation that makes even its most devious or tricky computational roundabouts quicker than many an apparent crow flight…
Artificial Intelligence (AI) is an attempt to make machines which amplify our cerebral power exactly as a diesel engine amplifies our muscle power. They are designed to emulate human mental faculties such as language, memory, reasoning, common sense, speech, vision, perception and the like. We are still far from making them despite the recent spectacular advances in computer technology. The reason is that human intelligence has not yet been sufficiently introspective to know itself. When at last we have discovered the physical structures and neurophysiological principles underlying the intelligence shown by a living brain, we shall have also acquired the means of simulating it synthetically.
What we have done so far is to make a very crude simulation of the living brain by assembling simple neural substitutes of a digital kind: on-and-off mini-switches called transistors, or silicon chips. As a result, the comparison of the networks of electrical hardware we call computers with the networks of biological neurons we call animal brains is our most puzzling paradox. Though they are as closely allied in some respects as next-of-kin, in others they are as far apart as stars and sputniks.
Consider, to start with, the resemblances. In performing a mental task, both resort to three principles: of choice, to decide the future course of action; of feedback, to self-rectify errors; and of redundancy, in coding as well as in components, to secure reliability. These resemblances at first encounter appeared so striking as to earn computers the nickname of electronic brains. But we now know better. Alas! The divergences diverge much more radically than the coincidences coincide. The most conspicuous departure is in respect of the methods of storage, recall and processing of information. Because the methods of data storage, access and processing practiced in computer engineering today are very primitive vis-à-vis their infinitely more complex but unknown analogues in the living brain, the computer has not yet come of age to wean itself away from the tutelage of algorithms. That is, it can only handle such tasks as can be performed by a more or less slavish follow-up of prescribed routines. The living brain, on the contrary, operates essentially by non-algorithmic methods bearing little resemblance to the familiar rules of logic and mathematics that are built into a computer.
It seems that the language of the brain is logically much "simpler" than any other we have been able to devise so far. As von Neumann once remarked, we are hardly in a position even to talk about it, much less to theorize about it. We have not yet been able to map out the activities of the human brain in sufficient detail to serve as a foothold for such an exercise. For too long we have been content with the superficial analogy between the living brain and digital computers merely because both are hard-wired networks in which information is transmitted as digitally coded electrical signals. But in the living brain the further transmission of information across the synapses (junctions of neurons) is no longer digital. It is not an all-or-nothing response, as it is in computers. It is a finely graded response secured by the subtly controlled release of neurotransmitters, neurochemicals which have fast but brief effects on very limited targets. In the past ten years it has been realized that an enormous number of substances are involved in synaptic communication, deftly modulating the transmission process. As far as we know today, these substances are of two kinds: neurotransmitters and neuromodulators.
Unlike the fast transmitters, most of the neuromodulators are slow acting, long lasting and often work at long range in very complex ways. Their discovery has given rise to a new concept of the nervous system as an array of hard-wired circuits constantly tuned by chemical signals, totally unlike our telephone exchanges or computers. It is this chemical modulation that gives the living brain its flexibility and adaptability to long-term changes, both internal and environmental. Recent research has revealed two main points of departure between the computer and the brain. First, the basis of the latter's long-term memory is very unlike that of the computer. It seems to be the result of changes in protein synthesis brought about by neuromodulators, which produce permanent changes at synapses and regulate receptor density, and this might alter a neuron's sensitivity to transmitters. Second, we have to study the living brain at three different levels: the molecular, the cellular and the organismic. Unfortunately, it is very difficult to switch from one level to another. Thus it is very hard to link studies of, for example, the molecular constituents of membranes to whole-animal behaviour. The more we see of one level, the more we block the view of the others.
As a result, AI research seems to have now reached a blind alley. It cannot make a breakthrough by doing more of the same. It will take long to leap out of it. For we are yet very far from completing the ever-increasing catalogue of neuroactive substances modulating the signals at synapses let alone unraveling the complexity of their actions. It alone is likely to provide the unifying factor so far lacking in neurobiology. Its provision is the open sesame to AI…
There are a number of non-computational tasks, like writing poetry, composing music, recognizing patterns and proving theorems, which the computer is as yet unable to handle. Although in these cases too problem-solving processes or algorithmic routines can be devised, they unfortunately offer no guarantee of success. Consider, for example, the routine for writing "books on philosophy, poetry, politics and so on" described at length in the Academy of Lagado episode of Swift's Gulliver's Travels. No computer, however rapid, could follow it through to success, simply because the number of possible solutions that the routine generates rises exponentially. It is the same with all other complex problems, such as playing games, proving theorems and recognizing patterns, where we may be able to devise a recursive routine to generate possible solutions and a procedure to test them. The search fails because of the overwhelming bulk of eligible solutions that have to be scanned for the test.
The only way to solve such non-algorithmic problems by mechanical means is to find some way of ruthlessly reducing the bulk of possibilities under whose debris the solution is buried. Any device, stratagem, trick, simplification or rule of thumb that does so is called a heuristic. For example, printing on top of every page of a dictionary the first and last words appearing there is a simple heuristic device which greatly lightens the labour of looking for the word we need. "Assume the answer to be X and then proceed backwards to obtain an algebraic equation" is another instance of a heuristic for divining arithmetical riddles like those listed in the Palatine Anthology or Bhaskara's Lilawati. Drawing a diagram in geometry and playing trumps in bridge "when in doubt" are other instances of heuristic devices that often succeed. In general, by limiting the area of search drastically, heuristics ensure that the search does terminate in a solution most of the time, even though there is no guarantee that the solution will be optimal.
Indeed, a heuristic search may fail altogether, for, by casting aside a wide range of possibilities as useless, it may discard the very one that holds the solution. Consequently, a certain risk of partial failure, such as sometimes missing the optimal solution, or even of total failure, has to be faced in a heuristic search. Nevertheless, resort to heuristics for solving problems more complex than computation, and not amenable to algorithmic routines, is inescapable.
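The dictionary guide-word trick can likewise be sketched in a few lines of Python (the pages and words below are invented for illustration): whole pages whose guide-word range cannot contain the target are discarded, and only one page is ever scanned in full.

```python
# Each "page" carries its first and last word as guide words.
pages = [
    ("aardvark", "axle",   ["aardvark", "abacus", "axle"]),
    ("baboon",   "byte",   ["baboon", "banana", "byte"]),
    ("cabbage",  "cymbal", ["cabbage", "cherry", "cymbal"]),
]

def heuristic_lookup(pages, target):
    """Use the guide words to discard whole pages, then scan only the one
    page that could possibly contain the target."""
    for first, last, words in pages:
        if first <= target <= last:      # heuristic: range check on guide words
            return target in words       # exhaustive test limited to this page
    return False                         # every page ruled out by the heuristic

print(heuristic_lookup(pages, "cherry"))   # True
print(heuristic_lookup(pages, "dolphin"))  # False
```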
It may soon be possible to wear your computer or mobile phone under your sleeve, with the invention of an ultra-thin and flexible electronic circuit that can be stuck to the skin like a temporary tattoo. The devices, which are almost invisible, can perform just as well as more conventional electronic machines but without the need for wires or bulky power supplies, scientists said.
The development could mark a new era in consumer electronics. The technology could be used for applications ranging from medical diagnosis to covert military operations.
Fig.: The patch of electronic skin consists of an array of electrical devices for monitoring the vital signs of the body
The "epidermal electronic system" relies on a highly flexible electrical circuit composed of snake-like conducting channels that can bend and stretch without affecting performance. The circuit is about the size of a postage stamp, is thinner than a human hair and sticks to the skin by natural electrostatic forces rather than glue or any other conventional adhesive.
“We think this could be an important conceptual advance in wearable electronics, to achieve something that is almost unnoticeable to the wearer. The technology can connect you to the physical world and the cyber world in a very natural way that feels comfortable,” said Professor Todd Coleman of the University of Illinois at Urbana-Champaign, who led the research team.
A simple stick-on circuit can monitor a person's heart rate and muscle movements as well as conventional medical monitors can, but with the benefit of being weightless and almost completely undetectable. Scientists said it may also be possible to build a circuit for detecting throat movements around the larynx in order to transmit the information wirelessly as a way of recording a person's speech, even if they are not making any discernible sounds.
Tests have already shown that such a system can be used to control a voice-activated computer game and one suggestion is that a stick-on voice box circuit could be used in covert police operations where it might be too dangerous to speak into a radio transmitter.
“The blurring of electronics and Biology is really the key point here,” said Yonggang Huang, Professor of Engineering at Northwestern University in Evanston, Illinois. “All established forms of electronics are hard, rigid. Biology is soft, elastic. It’s two different worlds. This is a way to truly integrate them.”
Engineers have built test circuits mounted on a thin, rubbery substrate that adheres to the skin. The circuits have included sensors, light-emitting diodes, transistors, radio frequency capacitors, wireless antennas, conductive coils and solar cells.
“We threw everything in our bag of tricks on to that platform, and then added a few other new ideas on top of those, to show that we could make it work,” said John Rogers, Professor of Engineering at the University of Illinois at Urbana-Champaign, a lead author of the study published in the journal Science…
Television, as you know, is a rather complicated process. Whenever such a process is developed, you can be sure a great many people had a hand in it, and its beginnings go far back. So television was not "invented" by one man alone.
The chain of events leading to television began in 1817, when a Swedish chemist named Jons Berzelius discovered the chemical element “selenium.” Later it was found that the amount of electrical current selenium would carry depended on the amount of light which struck it. This property is called “photoelectricity.”
In 1875, this discovery led a United States inventor, G.R. Carey, to make the first crude television system, using photoelectric cells. As a scene or object was focused through a lens onto a bank of photoelectric cells, each cell would control the amount of electricity it would pass on to a light bulb. Crude outlines of the object that was projected on the photoelectric cells would then show in the lights on the bank of bulbs.
The next step was the invention of “the scanning disk” in 1884 by Paul Nipkow. It was a disk with holes in it which revolved in front of the photoelectric cells and another disk which revolved in front of the person watching. But the principle was the same as Carey’s.
In 1923 came the first practical transmission of pictures over wires, and this was accomplished by Baird in England and Jenkins in the United States. Then came great improvements in the development of television cameras. Vladimir Zworykin and Philo Farnsworth each developed a type of camera, one known as "the iconoscope" and the other as "the image dissector."
By 1945, both of these camera pickup tubes had been replaced by "the image orthicon." Today, modern television sets use a picture tube known as "a kinescope." In this tube, an electron gun scans the screen in exactly the same way as the beam does in the camera tube, enabling us to see the picture.
Of course, this doesn't explain in any detail exactly how television works, but it gives you an idea of how many different developments and ideas had to be perfected by different people to make modern television possible.
Colour television pictures are created by additive mixture of the three primary colours in light. Light from the picture is focused on the colour TV camera by a lens. When the light reaches the camera, it is split into three beams by a set of special mirrors. Each beam of light is then passed through a filter to produce three separate beams of red, green and blue light. These beams are then directed to three separate camera tubes which produce signals for each of these colours. These are further processed to produce signals defining the brightness, hue and saturation of the scene. These three signals are eventually broadcast on one carrier wave. A colour TV screen has thousands of tiny areas that glow when struck by a beam of electrons. Some areas produce red light, others produce green light, and still others produce blue light. When we watch a colour programme, we do not actually see each red, green or blue area. Instead, we see a range of many colours produced when the red, green and blue lights blend in our vision. We see white light when certain amounts of red, green and blue light are combined. The combining of the primary colours to produce white light makes it possible for a colour TV to show black-and-white pictures.
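One concrete way to see how a single brightness signal can be derived from the three colour signals is the standard-definition television luma weighting, Y = 0.299 R + 0.587 G + 0.114 B. This formula is a standard broadcast convention, not quoted in the passage; the sketch below simply evaluates it.

```python
def luma(red, green, blue):
    """Approximate brightness (luminance) signal from the three colour signals,
    using the standard-definition TV weighting Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * red + 0.587 * green + 0.114 * blue

# Equal full-strength primaries add up to white (maximum brightness);
# a pure green patch registers as brighter than an equally strong blue one.
print(luma(255, 255, 255))  # 255.0 -> white
print(luma(0, 255, 0))      # ~149.7
print(luma(0, 0, 255))      # ~29.1
```

This brightness component is also what allows a black-and-white receiver to display a usable picture from a colour broadcast.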
It has been found that an image of any object generally lasts on the retina of the eye for about one-tenth of a second after the object has disappeared. This effect of the image on the retina has made possible the production of so-called "motion pictures." Twenty-four separate pictures, each slightly different from the previous one, are projected onto the screen per second and thus give the impression of continuous motion. The effect that rapidly changing pictures on a TV screen create on our retina is referred to as "persistence of vision." In a television receiver, twenty-five complete pictures are produced every second.
The traditional curved television screen, the cathode ray tube (CRT), has until recently been the familiar face of our household television sets and may still be the most ubiquitous electronic product in our homes. But owing to sweeping technological advancements in the electronics industry of late, it has slowly started giving way to a new breed of flat-screen TV sets which can virtually be hung like a painting on the wall, without the back bulge that used to be a common morphological feature of traditional TV sets. The birth of these flat-screen, wall-painting-like TVs has been made possible by fresh and rapid developments in an otherwise quite old technology, the liquid crystal display (LCD). Before the advent of the flat-screen TV era, LCD technology had mainly been used in calculators and electronic watches. It is this use of LCD technology in the new breed of TV sets that has given them the name LCD TVs in the marketplace.
Liquid crystals have been known for long. In 1888, Reinitzer, an Austrian botanist, found that the organic chemical cholesteryl benzoate melted at a temperature of 145°C to form a cloudy liquid, which became clear only on further heating to about 173°C. The phase between these two temperatures was given the name "liquid crystal".
In fact, it has been noted that the liquid crystalline phase occurs in solids that have two or more melting points. Liquid crystals flow and take the shape of the container; however, the molecules of such a liquid tend to maintain their positions approximately parallel to one another in arrays of one or two dimensions instead of remaining packed in a haphazard manner. The "nematic" liquid crystal is the simplest type. The most important technological application of liquid crystals is in digital displays. In a watch or calculator, two glass sheets are coated on their inner surfaces with a transparent film of electrically conductive material. In 1963 it was discovered that an electric charge passing through a liquid crystal causes the molecules to realign and rechannel the light waves falling upon them.
Characters or images are rendered visible by modifying ambient light. Very little electricity is used by the LCD. However, it was found that as the display became bigger, the contrast between light and dark diminished. In the first generation of LCD screens, a picture is created by applying voltages to rows and columns of liquid crystals. The use of thin-film transistor technology has led to what is known as active-matrix LCDs. In this system, a tiny transistor is added to each cell to augment its charging power; thus each picture element, or pixel, is switched on and off by its own transistor. Several thousand electronic elements are printed on a sheet of chemically treated glass, producing an electronic mesh, or "active matrix". The Japanese have made advanced LCDs basic to their electronics strategy. Today, LCDs are key components in portable computers, laptops, hand-held organizers, video games and virtual reality devices.
Light has long been used as a medium for the transfer of information, in the form of fires, lamps and light signals. But in the matter of long-distance communications, light, as one perceives it, poses problems: it does not go round corners because it always travels in a straight line, and corners need to be negotiated if long distances are to be spanned. Furthermore, the atmosphere as a medium for the transmission of light would be unsuitable, as dust, fog, rain, etc., cause heavy attenuation owing to absorption, dispersion and diffraction, limiting the range of even line-of-sight transmission to just a few kilometers. This serious limitation called for a device which could entrap light within itself and which could be flexed to negotiate physical obstacles. The other requirement was the availability of a source of light of high intensity. In 1970, a breakthrough was achieved when optical fibres with losses of the order of 20 decibels per kilometer were developed. The invention of lasers and light-emitting diodes (LEDs) solved the problem of the light source. Continuing research finally gave birth to optical fibre based communication systems.
The velocity of light varies in different transparent media. A medium in which light travels more slowly is termed denser than one in which it travels faster, which may be called the rarer medium. A light ray, on crossing from a denser medium into a less dense medium, bends away from the normal of incidence. If one keeps on increasing the angle of incidence (i.e., the angle between the ray of light and the normal), a stage is reached when the light, instead of being refracted, undergoes reflection. This phenomenon is known as "total internal reflection". If, on such reflection, the reflected ray again meets a surface separating a transparent medium identical to the one described above, it will again undergo total internal reflection. Optical fibres are manufactured to satisfy this condition, enabling light to remain trapped inside the fibre while it travels along the length of the fibre.
The basic building blocks of a fibre optic communication system are: (a) a transmitter consisting of a light source and associated drive circuitry; (b) fibre encased in a cable to provide a practical method of spanning long distances with ease and safety; and (c) a receiver made up of a detector, an amplifier and other signal-restoring circuitry.
Injection laser diodes (ILDs) and LEDs are the light sources used in optical fibre communication. These devices can be directly modulated by varying their input currents in consonance with the electrical signals to be transmitted. Alloys of gallium, aluminum and arsenic are used for light sources operating in the 900 nm band. For wavelengths of 1100 nm to 1600 nm, alloys of indium, gallium, arsenic and phosphorus are used. Laser diodes are used for long-distance communication since they produce coherent light beams that are highly monochromatic. An optical fibre is a thin, hair-like thread made of glass or plastic, with a cylinder at the centre called the "core". The core is surrounded by a material layer, the "cladding". The core has a specific refractive index which is chosen for a particular application. Owing to the difference in refractive indices between the core and the cladding, the light signal is trapped inside the core (by total internal reflection) even if the fibre is bent. There are two types of optical fibre, depending on the size of the core: the single-mode fibre (with a core of about 10 micron diameter) and the multi-mode fibre (with a core of about 50 micron diameter). Composite optical fibre cables also contain copper wires for providing power to repeaters. Cables may be unarmoured or armoured with steel tape or steel wires. Different types of optical fibre cables are manufactured according to specific applications.
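The condition for total internal reflection follows from Snell's law: sin(theta_c) = n2/n1, where n1 is the refractive index of the denser core and n2 that of the surrounding cladding. The sketch below computes the critical angle for a pair of assumed refractive indices; the index values are illustrative, not taken from the text.

```python
import math

def critical_angle_degrees(n_core, n_cladding):
    """Critical angle for total internal reflection at a core/cladding boundary:
    sin(theta_c) = n_cladding / n_core (requires n_core > n_cladding)."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices: a glass core of n = 1.48 and cladding of n = 1.46.
theta_c = critical_angle_degrees(1.48, 1.46)
print(f"critical angle = {theta_c:.1f} degrees")  # rays striking the wall more
                                                  # obliquely than this stay trapped
```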
The photo-detector senses the luminescent power falling upon it and converts the power variation into corresponding varying electric currents. Semiconductor photo-detectors that are small in size and possess high sensitivity and fast response time are used: PIN photodiode and avalanche photodiode (APD) being the two types most generally used. Silicon, germanium and indium-gallium arsenide are the materials used in making photodiodes. Even though the fibre is extremely pure, there are minuscule impurities that absorb light, and light may be scattered due to variations in composition and molecular density. Bending also causes a slight loss of energy. Light pulses overlap while traveling, causing difficulty in separating them at the receiver end. Despite all this, fibre optics communications has distinct advantages.
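Fibre loss is quoted in decibels per kilometer, so the power surviving a span follows P_out = P_in x 10^(-total loss in dB / 10). The sketch below uses the 20 dB/km figure mentioned above for the 1970 fibre, and an assumed 0.2 dB/km for a modern fibre over the 80 km repeater spacing cited in the advantages list that follows; the modern loss figure is an assumption for illustration, not a value from the text.

```python
def output_power(input_power_mw, loss_db_per_km, length_km):
    """Power remaining after a fibre span, given its attenuation in dB/km:
    P_out = P_in * 10 ** (-total_loss_dB / 10)."""
    total_loss_db = loss_db_per_km * length_km
    return input_power_mw * 10 ** (-total_loss_db / 10)

# With the 20 dB/km fibre of 1970, 1 mW is reduced a thousand-fold in 1.5 km;
# with an assumed 0.2 dB/km modern fibre it can still be detected after 80 km.
print(output_power(1.0, 20.0, 1.5))   # ~0.001 mW
print(output_power(1.0, 0.2, 80.0))   # ~0.025 mW
```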
Optical fibre communications systems provide the following major advantages over other conventional systems:
(a) Optical fibre has an extremely wide bandwidth, permitting several times larger volume of information than is possible through conventional means. It is possible to encapsulate several hundred fibres in one single cable.
(b) Repeater distances of the order of 80 km to 100 km are easily possible today.
(c) The higher channel capacity, larger repeater distances, and extremely low maintenance cost make the systems cost-effective.
(d) Optical fibre cables are about 25 times lighter in weight and at least 10 times smaller in size than conventional cables. This makes their transportation and installation much easier. It also makes them attractive for airborne, shipborne and space applications.
(e) Optical fibre communications are free from radio frequency interference, and are immune to electromagnetic interference and electromagnetic pulses. This results in unprecedented high fidelity.
(f) The communication is almost error-free.
(g) It is well nigh impossible to intercept optical fibre communication without being detected.
(h) The system requires simple maintenance. Very high reliability is achieved.
(i) Power consumption is extremely low.
The story of the invention of the telephone is a very dramatic one. (No wonder they were able to make a movie about it!) But first let's make sure we understand the principle of how a telephone works.
When you speak, the air makes your vocal cords vibrate. These vibrations are passed on to the air molecules so that sound waves come out of your mouth, that is, vibrations in the air. These sound waves strike an aluminum disk or diaphragm in the transmitter of your telephone. And the disk vibrates back and forth in just the same way the molecules of air are vibrating.
These vibrations send a varying, or undulating, current over the telephone line. The weaker and stronger currents cause a disk in the receiver at the other end of the line to vibrate exactly as the diaphragm in the transmitter is vibrating. This sets up waves in the air exactly like those which you sent into the mouthpiece. When these sound waves reach the ear of the person at the other end, they have the same effect as they would have if they came directly from your mouth!
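The paragraphs above describe, in words, how the transmitter turns sound pressure into an undulating current and the receiver turns it back into sound. The toy model below is only an illustration of that idea: the bias current, sensitivity figure and 400 Hz tone are invented numbers, not real telephone parameters.

```python
import math

# A toy model of the telephone principle: the transmitter turns sound
# pressure variations into a varying ("undulating") current, and the
# receiver turns that current back into vibrations of its own diaphragm.
# All numbers here are illustrative, not real telephone parameters.

BIAS_CURRENT_MA = 20.0   # steady current on the line (hypothetical)
SENSITIVITY = 5.0        # mA of variation per unit of sound pressure (hypothetical)

def transmitter(sound_pressure):
    """Diaphragm in the mouthpiece: pressure modulates the line current."""
    return BIAS_CURRENT_MA + SENSITIVITY * sound_pressure

def receiver(line_current_ma):
    """Disk in the earpiece: current variations reproduce the pressure."""
    return (line_current_ma - BIAS_CURRENT_MA) / SENSITIVITY

# A 400 Hz tone sampled at a few instants.
for t_ms in range(0, 5):
    pressure = math.sin(2 * math.pi * 400 * t_ms / 1000)
    current = transmitter(pressure)
    reproduced = receiver(current)
    print(f"t={t_ms} ms  pressure={pressure:+.2f}  current={current:.2f} mA  out={reproduced:+.2f}")
```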
Now to the story of Alexander Graham Bell and how he invented the telephone. On June 2, 1875, he was experimenting in Boston with the idea of sending several telegraph messages over the same wire at the same time. He was using a set of spring-steel reeds. He was working with the receiving set in one room, while his assistant, Thomas Watson, operated the sending set in the other room.
Watson plucked a steel reed to make it vibrate, and it produced a twanging sound. Suddenly Bell came rushing in, crying to Watson: “Don’t change anything. What did you do then? Let me see.” He found that the steel reed, while vibrating over the magnet, had caused a current of varying strength to flow through the wire. This made the reed in Bell’s room vibrate and produce a twanging sound.
The next day the first telephone was made, and voice sounds could be recognized over the first telephone line, which ran from the top of the building down two flights. Then, on March 10 of the following year, the first sentence was heard: “Mr. Watson, come here, I want you.”
A basic telephone set contains a transmitter that transmits the caller’s voice; a receiver that amplifies sound from an incoming call; a rotary or push-button dial; a ringer or alerter; and a small assembly of electrical parts, called the anti-sidetone network, that keeps the caller’s voice from sounding too loud through the receiver. In a two-piece telephone set, the transmitter and receiver are mounted in the handset, the ringer is typically in the base, and the dial may be in either the base or the handset. The handset cord connects the base to the handset, and the line cord connects the telephone to the telephone line.
Teletype, telex and facsimile are all methods for transmitting text rather than sounds. These text delivery systems evolved from the telegraph. Teletype and telex systems still exist, but they have been largely replaced by facsimile machines, which are cheaper and better able to operate over the existing telephone network. The teletype, essentially a printing telegraph, is primarily a point-to-multipoint system for sending text: it converts the same pulses used by telegraphs into letters and numbers, and then prints out readable text. It was often used by news media organizations to provide newspaper stories and stock market data to subscribers. Telex is primarily a point-to-point system that uses a keyboard to transmit typed text over telephone lines to similar terminals situated at individual company locations. Facsimile transmission now provides a cheaper and easier way to transmit text and graphics over distances. Fax machines contain an optical scanner that converts text and graphics into digital, machine-readable codes. This coded information is sent over ordinary analog telephone lines through a modem included in the fax machine. The receiving fax machine’s modem demodulates the signal and sends it to a printer also contained in the machine itself.
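Group 3 fax machines compress each scanned line with run-length based coding (Modified Huffman) before the modem sends it. The sketch below shows only the plain run-length step, on a made-up 20-pixel scan line, as a rough illustration of how a scanned image becomes a compact digital code.

```python
def run_length_encode(scan_line):
    """Encode a scanned line of 0 (white) / 1 (black) pixels as (colour, run) pairs.
    Real Group 3 fax machines go further and map these runs to Modified
    Huffman bit codes before the modem sends them down the phone line."""
    runs = []
    current, count = scan_line[0], 1
    for pixel in scan_line[1:]:
        if pixel == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = pixel, 1
    runs.append((current, count))
    return runs

# A toy 20-pixel scan line: mostly white paper with a short black stroke.
line = [0] * 8 + [1] * 5 + [0] * 7
print(run_length_encode(line))   # [(0, 8), (1, 5), (0, 7)]
```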
‘Wireless’ is once again at the forefront of communication systems. Today people are on the move constantly; the room-bound telephone is not of much use to these mobile people. The cordless telephone was the first step in giving mobility to the telephone. Now cellular mobile telecommunication technology has been developed.
In this system, the total area of operation is divided into a network of small areas called cells, varying in size from 0.5 to 40 km in diameter. Each cell has a transceiver (called the ‘cell site’ or the ‘radio base station’) to transmit and receive calls within that cell. A subscriber moving from one cell to another has his call transferred to the transceiver of the next cell without a break in the call.
The cells are drawn as hexagons to suit the modular design of the network, so that more cells can be added when the service area expands. In reality, however, their shape, size and location depend on other parameters such as the radiation pattern of the transceiver, signal strength, topography, and radio interference due to multipath reflection from objects. The mobile switching centre (MSC) links the cells together through a computer. Microwave or digital land-links connect the cells to the MSC, which is also linked to the public telephone network.
A call placed on a cellular phone is directed to the nearest cell site, from where it is routed to the MSC. The MSC, in turn, signals all cell sites to locate the person called; on locating the person, it allocates a voice channel to connect the two users.
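A minimal sketch of this call set-up, using an invented set of cells and subscriber numbers, might look like the following: the MSC pages every cell site, finds the one serving the called number, and allocates a voice channel there.

```python
# Made-up cells and subscriber numbers, purely for illustration.
cells = {
    "cell-A": {"subscribers": {"98100-11111", "98100-22222"}},
    "cell-B": {"subscribers": {"98100-33333"}},
    "cell-C": {"subscribers": {"98100-44444", "98100-55555"}},
}

def msc_locate(called_number):
    """Page all cell sites and return the cell currently serving the called number."""
    for cell_id, cell in cells.items():
        if called_number in cell["subscribers"]:
            return cell_id
    return None

def place_call(caller, called_number):
    cell_id = msc_locate(called_number)
    if cell_id is None:
        return f"{called_number}: subscriber not reachable"
    return f"Voice channel allocated in {cell_id} to connect {caller} and {called_number}"

print(place_call("98100-11111", "98100-44444"))
```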
Pocket-size mobile telephone sets are now available because of miniaturization and high-power components. Cellular technology allows the same frequencies to be reused in more than one cell by skilfully manipulating the location and size of the cells. Cellular technology started with analog designs, but digital transmission has gradually increased the utilization of the frequency spectrum to as much as seven times that allowed by analog transmission.
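Frequency reuse can be illustrated with the textbook idea of a reuse cluster: the available channels are divided among a small group of cells, and cells far enough apart reuse the same group. The cluster size of 7 and the channel counts below are illustrative only.

```python
# Frequency reuse sketched with an illustrative cluster size of 7:
# the available channels are split into 7 groups, and sufficiently
# distant cells reuse the same group without interfering.

TOTAL_CHANNELS = 70
CLUSTER_SIZE = 7
channels_per_cell = TOTAL_CHANNELS // CLUSTER_SIZE

def channel_group(cell_index):
    """Cells whose indices differ by a multiple of the cluster size
    (a simplification of the real hexagonal geometry) share a group."""
    group = cell_index % CLUSTER_SIZE
    start = group * channels_per_cell
    return list(range(start, start + channels_per_cell))

print(channel_group(0))   # this cell and ...
print(channel_group(7))   # ... this one, 7 cells away, reuse the same channels
```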
There are several technologies available for mobile telephones: TACS (Total Access Communication System), CT-2, NMT (Nordic Mobile Telephone), AMPS (Advanced Mobile Phone System) and PCN (Personal Communication Network). The GSM (Global System for Mobile Communications) technology has been chosen by India. It is a digital system operating in more than 70 countries, and has the facility of ‘international roaming’, i.e. automatically locating users wherever they are within the GSM network when they are called. GSM works in the 900 MHz band with a frequency separation of 200 kHz. Time division multiple access (TDMA) is the basis of transmission: it enables several users to share a transmission channel by allotting them equal time slots periodically and sequentially. GSM technology also allows data services and protection against eavesdropping, besides future expansion of the network.
For the cellular mobile telephone service, India is divided into 20 ‘circle’ service areas and four metro cities.
Global System for Mobile (GSM) The Global System for Mobile (GSM) is the dominant worldwide system. It originally evolved as a pan-European digital standard and built a base in the US and Canada at a rapid pace.
GSM uses Time Division Multiple Access (TDMA). TDMA is not a spread-spectrum technology. It divides each narrowband radio carrier (200 kHz wide in GSM) time-wise into slots, and each conversation gets the radio for only part of the time. TDMA allows eight simultaneous calls on the same radio frequency.
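A minimal sketch of TDMA slot sharing, with three hypothetical calls on one carrier of eight slots, might look like this; real GSM adds framing, guard times and frequency hopping that are omitted here.

```python
# One radio carrier divided into 8 repeating time slots; each call
# transmits only in its own slot. Calls and slot counts are illustrative.

SLOTS_PER_FRAME = 8
calls = ["call-0", "call-1", "call-2"]   # hypothetical active calls

# Assign each call a fixed slot number on the carrier.
slot_of = {call: i for i, call in enumerate(calls)}

def who_transmits(slot_number):
    """Return which call (if any) owns a given slot, in every frame."""
    for call, slot in slot_of.items():
        if slot == slot_number % SLOTS_PER_FRAME:
            return call
    return "idle"

for slot in range(16):                   # two consecutive frames
    print(f"slot {slot:2d}: {who_transmits(slot)}")
```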
GSM’s larger user base is its biggest advantage. This, together with the roaming facility it allows, gives it an edge. The use of SIM cards is its other advantage. As of January 2003, GSM had about 60 million subscribers and a geographical reach covering up to 97 per cent of the world’s population.
In CDMA, the data is digitized and spread over the entire available bandwidth, unlike the narrow band of TDMA. Multiple calls are overlaid on the same channel, with each assigned a unique sequence code, and the data is then reassembled at the receiver’s end. The battery life of CDMA handsets is longer than that of analog phones, with a talk time of three to four hours and up to two-and-a-half weeks of standby time.
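The spreading idea behind CDMA can be sketched with short orthogonal toy codes: each user's bits are multiplied by a unique chip code, the signals add together on the channel, and correlating with one user's code recovers that user's bits. Real systems use much longer pseudo-noise and Walsh codes; everything below is illustrative.

```python
# Simplified direct-sequence spreading: two users share one channel,
# each identified by a unique chip code. Toy codes and data only.

CODE_A = [1, 1, 1, 1]
CODE_B = [1, -1, 1, -1]          # orthogonal to CODE_A

def spread(bits, code):
    """Map bits {0,1} to {-1,+1} and multiply each bit by the chip code."""
    return [(1 if b else -1) * c for b in bits for c in code]

def despread(channel, code):
    """Correlate the summed channel with one code to recover that user's bits."""
    n = len(code)
    bits = []
    for i in range(0, len(channel), n):
        corr = sum(channel[i + j] * code[j] for j in range(n))
        bits.append(1 if corr > 0 else 0)
    return bits

bits_a, bits_b = [1, 0, 1], [0, 0, 1]
channel = [a + b for a, b in zip(spread(bits_a, CODE_A), spread(bits_b, CODE_B))]

print(despread(channel, CODE_A))   # [1, 0, 1] -- user A's data recovered
print(despread(channel, CODE_B))   # [0, 0, 1] -- user B's data recovered
```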
The disadvantage with CDMA handsets is that, at the moment, these phones do not have a SIM card and are tied to the network on which they are activated. Limited roaming service is its other drawback.
Vital Differences

Characteristic | CDMA | GSM
Capacity | May increase, but only when a sufficient percentage of compatible handsets are in use | Reduces capacity by using voice slots
Use | Voice or data | Data only
Scope of hardware upgrade | Radio cards and data routing cards | Voice-switching hardware, data routing hardware, equipment frames, radios
Observed data speeds (unconfirmed) | 40-60 kbps | 20-40 kbps
Claimed maximum data speeds | 144 kbps | 115 kbps
Handset compatibility | All existing handsets can access voice systems | New terminals required to access the voice system

Note: kbps stands for kilobits per second.