Matter
Anything that exists in nature, living or non-living, large or small, is formed of matter; matter is anything that occupies space, possesses mass and can be perceived by our visual or other senses. In terms of its structural organization, matter on the earth exists at three levels, called sub-atomic, atomic and molecular in ascending order. The sub-atomic level is the ultimate particulate level of matter: below it, matter does not exist on the earth, yet independent sub-atomic units of matter do not exist on the earth as such either. Although an atom is by nature an electrically neutral species, it too lacks an independent existence on earth. The matter of the earth therefore invariably has its atoms in groups of two or more, bound through chemical bonds formed between the bonded atoms by mutual sharing, donation or transfer of electrons from one atom to the other, depending upon the number of electrons present in the outermost shell of each combining atom. These groups form the third level of matter, called molecules.
Kinds of matter: As seen above, the chemical combination of atoms, whether of the same kind or of different kinds, results in the formation of molecules. On the basis of this chemical combination or organization, the matter on the earth predominantly exists in three forms or categories, called elements, compounds and mixtures.
An element is formed by the combination of similar atoms, and each element is noted for its own specific properties of colour, taste or chemical reactivity.
A compound, on the other hand, is formed by the combination of two or more atoms of different types. If these different atoms are combined through covalent bonds, the resulting compound is called molecular; if they are joined through electrovalent or ionic bonds, the compound is described as ionic. For example, water (H2O) is a molecular compound and NaCl an ionic compound.
A mixture as a form of matter has its basis in the fact that on the present earth there is hardly anything that can be called a pure compound; most of the earth's matter exists as a mixture of two or more compounds. In terms of their physical state, mixtures may be solids, liquids or gases. Of these, it is the fluid mixtures that are the most common sight around us. In a fluid mixture, one component, generally present in the larger concentration, acts as the solvent; the other component, dissolved in the solvent, may be a solid or a liquid, is present in the lesser concentration, and is called the solute.
All fluid mixtures exist in one of the following three forms:
True solutions: A true solution, also called a crystalloid solution, is generally homogeneous, in the sense that there is a uniform concentration of solute particles at all levels of the solution. Its homogeneous nature is ascribed to the fact that its solute particles are smaller than 0.001 micron in size and are thus invisible even under the microscope. Salt and sugar solutions are among the best examples of true solutions.
Suspensions: In a suspension, the solute particles are molecular aggregates larger than 0.1 micron in size and are thus visible even to the naked eye. For example, by mixing flour or clay particles in water, we obtain the kind of fluid mixture called a suspension.
Emulsion: A suspension in which both solvent and solute are liquids is called an emulsion. For example, butter is an emulsion of ghee in water.
Colloids: A colloidal mixture lies somewhere in between a true solution and a suspension. Its solute particles or molecules are comparatively large, so-called macromolecules, measuring 0.001 micron to 0.1 micron in diameter. Owing to this size, the solute molecules remain non-diffusible and are visible only under an ultramicroscope. Colloidal mixtures are heterogeneous in nature and are represented by substances such as milk, gum, gelatin and egg albumen. They are also noted for coagulating upon heating. The table below shows the size of the solute particles in the different mixtures:
Type of fluid mixture                     Size of solute particles
True solutions/crystalloid solutions      Less than 0.001 micron
Colloids/colloidal solutions              Between 0.001 and 0.1 micron
Suspensions                               More than 0.1 micron
Energy
All the forces that exist and operate in nature and cause natural events to occur on the earth, usually manifesting as magnetism, gravitation, sound, heat, electricity, light and so on, are nothing but different forms of energy, the outcome of the many changes or transformations that matter undergoes on Earth. Thus, it can simply be said that matter is always associated with energy.
In fact, contrary to matter, energy neither occupies any space nor possesses any mass. This is the reason that energy is always described and defined not in physical terms, but on an operational or functional basis only. In this sense, the simplest definition of energy that emerges is:
“It is the force that brings about a change or motion in matter”, or, in other words, “It is the capacity of a material body to do some work or to produce an effect.”
Taking a cue from the above definition of energy, it can be understood that on a primary scale, energy occurs in two forms or phases, called potential and kinetic energy.
Potential energy refers to the stored energy of matter, or the energy of position of a material body. Kinetic energy, on the other hand, is the energy of motion or action possessed by a material body; it is the active energy used in producing an effect in matter and hence in doing work. To illustrate, take the example of a footballer. The potential energy stored in his leg remains as such until he kicks the ball; as soon as he strikes the ball, that stored potential energy is converted into kinetic energy, giving motion to the football. In the same vein, a wooden log or boulder lying at rest has potential energy stored in it. Once it is carried uphill, the kinetic energy spent on carrying it gets stored in it as potential energy of position. If the same log or boulder is now pushed downhill, this potential energy of position is converted into the kinetic energy that lends it its downhill motion.
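To put rough numbers on the boulder example, here is a minimal sketch (the mass, height and the frictionless-slope assumption are illustrative, not from the text) showing the stored potential energy m·g·h converting into kinetic energy ½·m·v²:

```python
import math

g = 9.8          # acceleration due to gravity, m/s^2
mass = 50.0      # illustrative mass of the boulder, kg
height = 10.0    # illustrative height of the hill, m

# Potential energy stored at the top of the hill: PE = m*g*h
pe = mass * g * height

# If all of it converts to kinetic energy on the way down (no friction):
# (1/2)*m*v^2 = m*g*h  =>  v = sqrt(2*g*h)
v = math.sqrt(2 * g * height)

print(f"Stored potential energy: {pe:.0f} J")   # 4900 J
print(f"Speed at the bottom: {v:.1f} m/s")      # 14.0 m/s
```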
It can fairly be concluded that, just as potential energy is converted into kinetic energy and vice versa in the above examples, any form of energy can be converted into another, and this is true of both living and non-living matter on this earth. This is why we have different manifestations of energy, such as heat, sound, light, magnetic and nuclear energy, and it is the essence of the first law of thermodynamics. Yet it must be noted that during each such energy transformation there is always a loss of some amount of energy as heat, so that no system, living or non-living, has ever been created or found on earth that is 100% efficient; this is the essence of the second law of thermodynamics. The scientific truth is that dissipation as heat into the environment is the ultimate fate of all kinetic energy on this planet.
Both laws of thermodynamics, and the transformation of energy from one type to another, can best be explained by the following everyday example:
A steam locomotive runs the train by using coal as its fuel. The coal stores its energy as potential chemical energy, which upon burning is converted into heat energy. A part of this heat energy dissipates into the environment and is lost, but the rest is used to boil water and produce steam. The steam stores its energy as potential heat energy, which is converted into the mechanical kinetic energy that gives motion to the locomotive. A part of this mechanical energy is also fed into a dynamo and converted into electrical energy, which is eventually converted into light energy to light the bulbs and lamps, or back into mechanical energy to run the fans within the train compartments.
In the simplest terms, the electrons in an atom represent the matter of this complex, and their revolution around the nucleus in discrete orbits, which we call electron shells, represents the energy reservoir of the atom. Note that these electron shells essentially represent different levels of potential energy, manifested in the form of the electric force that keeps the negatively charged electrons tightly bound to the positively charged nucleus. To conclude: the electrons, protons and neutrons together represent the lowest units of matter (particulate matter), whereas the discrete electron shells, being non-particulate in nature, essentially represent units of energy. This is how the typical matter-energy complex operates and has been operating in nature ever since the evolution of the earth and of life upon it.
For a brief reference to the respective states of matter, particularly in terms of their behaviour: a gas does not possess an internal boundary, and it expands to fill any container completely, irrespective of the size or shape of the container. A liquid, on the other hand, possesses one internal boundary, which we call its surface; it fills its container below its surface regardless of the shape and size of that container. A solid, being rigid, is bounded internally in all directions and dimensions and thus needs no external container. As far as the compressibility of each state is concerned, a gas is much easier to compress than a liquid, whose compressibility lies somewhere between that of gases at one extreme and solids at the other; in other words, liquids are compressible only to a slight extent, while solids resist being compressed at all.
It is interesting to note that these distinguishing properties of matter are what have made its various forms substances of great significance for mankind, and this is essentially where our topic, “everyday Physics”, actually begins. In this regard, let us first start with:
As discussed earlier, one of the outstanding properties of gases is their compressibility. This property has been explained through the kinetic molecular theory of gases, according to which gas molecules are always in random motion, whether in a container or in the open. This random motion is made possible by the large relative distance between the gas molecules, which leaves a large empty space between them; this exactly explains the ease with which gases can be compressed. The gas molecules, being in constant rapid motion, move in straight lines until they collide with other molecules of the gas or with the walls of the container. This explains both the filling of containers by gases and their mixing with other gases. Likewise, when the moving gas molecules collide with the walls of the container, they cause the pressure within it. In fact, the pressure of a gas is the result of the number of such collisions per unit time: the more collisions per unit time, the greater the pressure, which can accordingly be increased by forcing more gas into the container or by decreasing the volume of the gas.
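The text does not name it, but the inverse pressure-volume relation described here is captured by Boyle's law (P1·V1 = P2·V2 at constant temperature); a minimal sketch with illustrative values:

```python
# Boyle's law: P1*V1 = P2*V2 at constant temperature. Halving the volume
# doubles the collision rate on the walls, and hence the pressure.
p1, v1 = 1.0, 10.0   # initial pressure (atm) and volume (litres), illustrative

v2 = 5.0             # volume halved by compression
p2 = p1 * v1 / v2    # more collisions per unit time => higher pressure

print(f"Halving the volume raises the pressure from {p1} atm to {p2} atm")
```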
Everyday physics involved in the gaseous state of matter: As we already know, gas molecules are always in rapid and random motion owing to the relatively large distance between them. This property of gases makes them exhibit a unique physical phenomenon we call diffusion.
It is due to this property of gases that an incense stick (agarbatti), or the odour of a perfume kept in one corner of the room, fills the entire room with its fragrance almost in an instant. The diffusion rate of a gas chiefly depends on its molecular weight: the lighter the gas, the more rapid its rate of diffusion. The Scottish chemist Thomas Graham neatly explained the rates of diffusion of different gases by propounding what is today known as Graham's law of diffusion: “the rates of diffusion of gases are inversely proportional to the square roots of their densities when the respective gases are at the same temperature and pressure.” This law was put to practical use for the first time while preparing the first atomic bomb. In that task, naturally occurring uranium, chiefly uranium-238, was combined with fluorine gas to form uranium hexafluoride gas (UF6). When this gas was passed through a porous membrane, such as unglazed porcelain or natural rubber, the molecules containing the lighter uranium isotope, uranium-235, diffused slightly more rapidly through the membrane than the molecules containing the heavier isotope, uranium-238. In this way, by repeated diffusions, the usable fissionable material, U-235, was separated from the unwanted U-238. This process of extracting the useful U-235 from naturally occurring uranium through diffusion, still practised today, is called enrichment, and the uranium so obtained is called enriched uranium.
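A quick back-of-the-envelope check of Graham's law for the two UF6 species (molar masses computed from standard atomic masses; a sketch, not the historical enrichment calculation itself):

```python
import math

# Graham's law: rate1/rate2 = sqrt(d2/d1), which for gases at the same
# temperature and pressure is ~ sqrt(M2/M1) with molar masses M.
m_uf6_235 = 235 + 6 * 19   # 349 g/mol, UF6 with the lighter U-235
m_uf6_238 = 238 + 6 * 19   # 352 g/mol, UF6 with the heavier U-238

ratio = math.sqrt(m_uf6_238 / m_uf6_235)
print(f"Relative diffusion rate (235-UF6 vs 238-UF6): {ratio:.4f}")
# ~1.0043: barely 0.4% faster per pass, which is why many repeated
# diffusion stages are needed to enrich uranium.
```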
Changing gas to liquid, and how your common cooking gas, LPG, is obtained: If the volume of a gas is sufficiently reduced by compressing it, by cooling it, or by both, the gas will ultimately condense into a liquid. The physics behind this is that decreasing the volume of a gas decreases the average distance between the gas molecules, so that the number of collisions per unit time increases and the molecules end up losing their kinetic energy, the energy of their motion, and hence condense into a liquid. Cooling a gas has a similar effect, since it too decreases the kinetic energy of the molecules and thus makes the gas liquefy. A number of substances existing in the gaseous state at room temperature can be condensed to a liquid by the application of pressure alone. But there do exist certain gases that resist liquefaction regardless of the pressure imposed on them and liquefy only after being cooled to low temperatures. This is explained by the fact that a critical temperature is always involved in the liquefaction of a gas: the critical temperature of a gaseous substance is the temperature above which it is impossible to liquefy the substance by pressure alone, and the pressure required to liquefy a gas at its critical temperature is called the critical pressure. The further a gas is cooled below its critical temperature, the less pressure is required to liquefy it. In fact, any substance existing in a gaseous state at a temperature above its critical temperature is what is properly called a gas. Our common household cooking gas, LPG, which is essentially a saturated hydrocarbon called butane, exists as a gas in its natural physical state and is liquefied by the application of both pressure and cooling; it thus carries weight and is stored in a container called a cylinder. A related case is CNG (compressed natural gas), used as a clean automobile fuel in most of our metros, which is similarly stored in cylinders, though under high pressure as a compressed gas.
Liquid hydrogen as fuel and the ‘Hydrogen Economy’: It is no surprise that, mass for mass, hydrogen as a fuel releases far more energy than petrol (about three times as much), and that too without emitting pollutants into the air during combustion. Yet the chief stumbling block in the way of its use as a universal clean fuel is the trouble associated with its storage: a cylinder of compressed hydrogen weighs about 30 times as much as a tank of petrol containing the same amount of energy. Hydrogen gas can be converted into its liquid state by cooling it to 20 K, but this requires highly expensive insulated tanks for storage. Tanks made from metal alloys such as NaNi5, Ti-TiH2 or Mg-MgH2 are currently in practical use for storing hydrogen in small quantities. These handicaps and limitations eventually prompted researchers to look for alternative techniques that would allow hydrogen to be used as a fuel more efficiently, and hence the alternative of the ‘Hydrogen Economy’ came to the fore. The hydrogen economy refers to the use of hydrogen as fuel; in essence, its basic principle is the storage and transportation of energy as liquid or gaseous dihydrogen, and this is where the advantage lies, for under a hydrogen economy energy is transmitted in the form of hydrogen rather than as electric power. Keeping this advantage in view, for the first time in the history of India a pilot project using hydrogen as an automobile fuel was launched in October 2005. Initially, 5% hydrogen was mixed into CNG for use as a fuel in four-wheeled vehicles, with the percentage to be increased gradually to the optimum level. Hydrogen is nowadays also used in fuel cells for the generation of electricity. Let us hope that in the years to come, more economically viable and safe sources of hydrogen will become available to make it a common and universal source of energy.
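A rough check of the "about three times" figure, using commonly quoted heating values for hydrogen and petrol (the numbers are assumptions drawn from general reference data, not from the text above):

```python
# Approximate energy content per unit mass (higher heating values):
h2_mj_per_kg = 142      # hydrogen, MJ/kg (approx.)
petrol_mj_per_kg = 46   # petrol (gasoline), MJ/kg (approx.)

print(f"Energy ratio, mass for mass: {h2_mj_per_kg / petrol_mj_per_kg:.1f}x")
# ~3.1x, consistent with the figure quoted above.
```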
Working on gases and other materials at extremely low temperatures: the science of CRYOGENICS: Cryogenics (from cryo, cold, and genics, to produce) is the emerging science of working on matter at extremely low temperatures, at which gases are reduced to the liquid state and other materials change their inherent properties. Cryogenics has immense applications in a host of diverse fields, from rocket and space technology to medicine, including preservation and many others, as outlined below:
Deep Cryogenics: This is the ultra-low-temperature processing of materials to enhance desired metallurgical and structural properties. The hardness of the treated material is unaffected, while its strength is increased. Such treatment greatly increases the strength and wear life of all types of vehicle components, castings and cutting tools. Other benefits include reduced maintenance, repair and replacement of tools and components, reduced vibration, rapid and more uniform heat dissipation, and improved conductivity.
Cryobiology: The word cryobiology (Greek cryo, “cold”, bios, “life” and logos, “science”) literally signifies the science of life at low temperatures. In practice, this field comprises the study of any biological material or system (e.g. proteins, cells, tissues, organs or organisms) subjected to any temperature below normal, ranging from moderately hypothermic conditions down to cryogenic temperatures.
Cryocooler: A mechanism that extracts heat from an object (cooler) and in doing so draws its temperature down below approximately 150 kelvin (cryo). Some areas of application for cryocoolers are: (i) Military: infrared sensors for missile guidance, tactical applications and satellite-based surveillance. (ii) Police and security: infrared sensors for night vision and rescue. (iii) Environmental: infrared sensors for atmospheric studies of the ozone hole and greenhouse effects, and for pollution monitoring. (iv) Commercial: cryopumps for semiconductor fabrication; high-temperature superconductors for cellular-phone base stations; superconductors for voltage standards; semiconductors for high-speed computers; infrared sensors for NDE and process monitoring. (v) Medical: cooling superconducting magnets for MRI systems; SQUID magnetometers for heart and brain studies; liquefaction of oxygen for storage at hospitals and for home use; cryogenic catheters and cryosurgery. (vi) Transportation: LNG for fleet vehicles; superconducting power applications (motors, transformers, etc.). (vii) Agriculture and biology: storage of biological cells, especially pollen grains and seeds, and of specimens.
Cryo-insulation or Superinsulation: The term “superinsulation” has different meanings to people in different technical areas. To the cryogenic engineer, superinsulation typically means many layers of alternating reflective films and low-conductivity spacers, or multilayer insulation (MLI). Vacuum insulation panels for appliances are sometimes also referred to as superinsulation.
Space Cryogenics: Space cryogenics is the application of cryogenics to space missions. Many such missions use infrared, gamma-ray and X-ray detectors that operate at cryogenic temperatures; the detectors are cooled to increase their sensitivity. Astronomy missions often use cryogenic telescopes to reduce the thermal emissions of the telescope itself, permitting very faint objects to be seen. A broad range of cryogenic technology is needed to support these missions. Another area of space science which makes use of cryogenics is sample preservation: this includes the preservation of biological samples from experiments on the Shuttle and the Station, and the preservation of material gathered from comets, asteroids and other planets.
Cryosurgery: Cryosurgery is an important minimally invasive surgical technique. It can be applied to any procedure in which scalpels are used to remove undesirable tissues.
Wind Tunnels: In cryogenic wind tunnels, cryogenic technology makes a major contribution to experimental aerodynamics.
Liquid carbon dioxide converted to solid CO2 and the formation of ‘dry ice’: When liquefied CO2 is allowed to expand rapidly, it can be made to solidify into what we call dry ice. Dry ice is used as a refrigerant, especially for ice cream and frozen food. In the film industry, extensive use of dry ice is made in constructing film sets portraying snow-clad mountainous panoramas.
A liquid is noted for having a fixed volume but no fixed shape. This is because its molecules are not located at fixed positions and are free to wander about, but only within the body of the liquid, for they cannot escape unless they are at the surface of the liquid.
In terms of the kinetic theory model, which applies to the gaseous state as well, the liquid state of matter shows the following fundamental characteristics. Liquids, unlike gases, possess a definite volume, which they maintain regardless of the shape or size of the container. In liquids the molecules are close together, so the mutual forces of attraction between them are quite strong and do not allow the molecules to be free to occupy any space. This closeness of the liquid molecules also explains why the density of a liquid is about a thousand times greater than the density of a gas under comparable conditions. The compressibility of liquids is likewise very much less than that of gases, simply because very little free space is available to the liquid molecules to yield to a compressing force. Although, like gases, liquids also exhibit the phenomenon of diffusion, they diffuse very slowly in comparison. This is attributed to the inherent nature of liquids: the molecules have little space between them and are subjected to numerous mutual collisions, which retard their movement and hence their diffusion.
Everyday Physics involved in the ‘liquid’ state of the matter:
According to the kinetic molecular theory, the molecules of a liquid are free to move about under the surface of the liquid under the influence of the kinetic energy they possess at any given temperature. The amount of this energy is not uniform across the molecules, however. Most of the molecules of a liquid have about the same amount of energy, but some possess appreciably more than the average and some appreciably less. Furthermore, because the molecules in a liquid are so close to one another, they collide frequently, and during each collision energy is redistributed between the colliding molecules, one gaining energy and the other losing it. This transfer of energy by collision can result in the formation of relatively high-energy molecules. Temperature is a measure of the average energy of a sample of liquid: if the temperature goes up, the average energy of the liquid goes up, and a decrease in average energy results in a lowering of the temperature of the sample.
The escape of a liquid molecule in the evaporation process requires energy; consequently, only molecules of high energy can evaporate from a liquid. But the escape of these high-energy molecules lowers the average energy of the remaining liquid. This is the rationale for the cooling effect that always accompanies evaporation, and it is why our bodies are cooled by sweating during sweltering, scorching summers.
As soon as the temperature of the liquid begins to drop below the temperature of the surrounding atmosphere, the atmosphere begins to warm the sample, adding more energy to the liquid to replace that lost by evaporation. This additional energy, accumulated by a relatively few of the liquid molecules through molecular collisions, permits the evaporation to continue until ultimately all of the molecules of the liquid have gained enough energy to evaporate; hence evaporation remains a continuous and spontaneous phenomenon.
Biologists ascribe this all to the ‘transpiration pull’, also referred to as the ‘cohesive force of water theory’. This theory was proposed in 1894 by two scientists, Dixon and Joly, and is based on the following two phenomena, both of which have the science of physics behind them:
No. 1) Transpiration pull exerted on the water column: As water evaporates from the mesophyll cells of the leaves into the intercellular spaces during transpiration, the water vapour is then lost to the outside through the stomata. This loss of water from the mesophyll cells causes an increase in their DPD (diffusion pressure deficit). The increased DPD makes the mesophyll cells suck in more water from the adjoining cells, which in turn absorb it from the xylem vessels of the leaf, vessels that always remain filled with a continuous column of water. As water is withdrawn from them, a tension or pull, which in physics resembles surface tension and is here called the transpiration pull, develops at the top of the water column and is transmitted down from the petiole of the leaves to the stem, finally terminating at the roots and thereby leading to an upward movement of water.
No. 2) Cohesion of the water molecules, forming an unbroken water column in the xylem: Xylem tracheids and tracheae are long tubular structures extending from the roots up to the leaves, one end of the xylem (each element continuous with the next) being in the root and the other in the leaves. These tubes always remain filled with water, in which the water molecules are tightly held together by the cohesive force between them, thus forming a continuous water column. The hydrogen bonds among water molecules provide a strong cohesion that holds together a chain of water extending through the entire height of the plant within the xylem. It is said that this water column within the xylem, as a result of the cohesive forces, is as strong and unbreakable as a steel wire of the same diameter; this is what the cohesion part of the physics ensures. Supplementing the cohesion of the water molecules is the adhesion force that develops between the water molecules and the walls of the xylem vessels. The attraction of the water molecules to the cell walls of the thin xylem tubes helps the water creep upwards, besides preventing the force of gravity from draining the water column out of the xylem vessels.
Thus, to conclude: water ascends to the top of the tree because of the transpiration pull, and the column of water inside the xylem tubes remains continuous because of the cohesive forces between the water molecules.
Surprisingly enough, we can float razor blades or needles on a water surface even though these objects are far denser and heavier than water, simply because the tough surface film of the liquid, which acts like a leather membrane owing to the phenomenon of surface tension, supports them!
The surface tension of a liquid may be changed, or rather reduced, by dissolving a substance in it, and such is the effect of soap on water. When soap or a detergent is dissolved in water, it greatly reduces the surface tension of the water by decreasing the horizontal forces of attraction that the original water surface had. The result is a net unbalanced force on the water surface directed away from the point of application of the soap, and hence we are able to wash our clothes.
Adhesion and cohesion of water, and the maintenance of a water-tight column from the roots to the last tip of a plant: Surface tension is related to another phenomenon associated with liquids. Between the molecules of a liquid there are attractive forces, which may be called forces of cohesion. Similarly, there are attractive forces between the molecules of a liquid and the molecules of its container; these are referred to as forces of adhesion. If the forces of adhesion are greater than the forces of cohesion, the liquid will be drawn up into a small-diameter tube made of the substance. If the liquid does not wet a substance, a small-diameter tube made of the substance will depress the surface of the liquid.
Capillarity and its physics in live action: The rise or depression of a liquid surface in a small-diameter tube is known as capillary action. The amount of change in the level of the liquid in the tube is directly proportional to the surface tension of the liquid.
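The proportionality stated above is made explicit by Jurin's law for capillary rise, h = 2γ·cosθ/(ρ·g·r); a minimal sketch with illustrative values for water in a narrow glass tube (the surface tension, contact angle and tube radius are assumed figures):

```python
import math

# Jurin's law: capillary rise is directly proportional to surface tension.
gamma = 0.072    # surface tension of water, N/m (approx., room temperature)
theta = 0.0      # contact angle of water on clean glass, radians (approx.)
rho = 1000.0     # density of water, kg/m^3
g = 9.8          # acceleration due to gravity, m/s^2
r = 0.5e-3       # tube radius, m (a tube of 1 mm bore)

h = (2 * gamma * math.cos(theta)) / (rho * g * r)
print(f"Capillary rise: {h * 100:.1f} cm")   # ~2.9 cm
```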
Capillary action and everyday physics: The sub-surface water in fields is carried up to the roots of plants through tiny pores (capillary tubes) in the soil. A sponge ‘drinks’ water into its capillary tubes; oil rises up a lamp wick; blotting paper blots or sucks up ink; a towel gets wet even if only a corner of it dips in water; and so on. These are all everyday examples of what we call capillarity.
Viscosity and the flow of liquids: Why do liquids like honey or seed oils flow more slowly than liquids like water or milk? Contrary to solids, liquids flow when a stress is applied, even though they are largely incompressible; the attractive intermolecular forces within them resist this flow. Some liquids, like castor oil and honey, flow slowly, while others, such as kerosene, flow rapidly. These differences in flow rates result from a property known as viscosity. Viscosity, in essence, is the resistance offered by a liquid to flow; it may be called fluid friction. The stronger the intermolecular forces, the higher the viscosity. When the temperature is raised, the viscosity of a liquid decreases, because the increase in temperature increases the average kinetic energy of the molecules, which overcomes the attractive forces between them. Heavier and larger molecules flow less easily than lighter and smaller ones; spherical molecules offer less resistance to flow than plate-like molecules; and molecules with flexible chains offer a very high resistance to flow because of the entangling of side chains. Impurities invariably increase the viscosity of a liquid. Since substances like honey and oils are made up of comparatively larger and heavier molecules, of sugars (such as fructose) and fatty acids respectively, they have comparatively stronger intermolecular forces and thus flow more slowly than water and similar liquids.
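One standard way to quantify this "fluid friction" is Poiseuille's law for laminar flow through a tube, Q = π·r⁴·ΔP/(8·η·L); the sketch below (tube dimensions, pressure drop and viscosities are rough assumed figures) shows why honey trickles where water pours:

```python
import math

# Poiseuille's law: flow rate falls in direct proportion as viscosity rises.
eta_water = 1.0e-3   # viscosity of water, Pa.s (approx., room temperature)
eta_honey = 10.0     # viscosity of honey, Pa.s (varies widely, ~2-10 Pa.s)

r, length, dp = 1.0e-3, 0.1, 100.0   # tube radius (m), length (m), pressure drop (Pa)

def flow_rate(eta):
    return math.pi * r**4 * dp / (8 * eta * length)

print(f"Water: {flow_rate(eta_water):.2e} m^3/s")
print(f"Honey: {flow_rate(eta_honey):.2e} m^3/s")   # ~10,000x slower
```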
Connecting concept: Why does ice float on water? When liquids are sufficiently cooled they congeal to a solid state; the temperature at which a substance solidifies from the liquid state is known as its freezing point. As in the case of the boiling point, atmospheric pressure also affects the freezing of a liquid. In the formation of ice from water, a very unusual property of water comes into play: water contracts, and hence becomes denser, as it is cooled down to 4 degrees centigrade, but when cooled below 4 degrees it expands and becomes less dense, and hence lighter. Ice, being less dense than the liquid water below it, therefore floats. Water is one of the few substances that do not shrink on changing from liquid to solid; rather, it expands, so much so that when water is frozen into ice it expands by about one-ninth, meaning that nine litres of water will give us about ten litres of solid ice! Taking a cue from all this, we can see why automobile radiators and water pipes burst or crack in chilling winters: the freezing water leaves no room for the expanding ice thus formed!
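The one-ninth expansion figure can be checked against the approximate densities of water and ice (the density values are standard reference figures, not from the text):

```python
# Volume of ice from a given volume of water, by conservation of mass:
rho_water = 1.000    # g/cm^3 at ~4 deg C (approx.)
rho_ice = 0.917      # g/cm^3 (approx.)

water_volume = 9.0   # litres
ice_volume = water_volume * rho_water / rho_ice
print(f"{water_volume} L of water freezes into about {ice_volume:.1f} L of ice")
# ~9.8 L, close to the one-ninth expansion quoted above.
```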
Solids are characterized by their high density and low compressibility compared with those of the gas phase. The values of these properties for solids indicate that the molecules (or ions) in them are relatively close together. Solids can very easily be distinguished from liquids by their definite shape, considerable mechanical strength and rigidity. These properties are due to the existence of very strong forces of attraction amongst the molecules (or ions) of the solids. It is because of these strong forces that the structural units (atoms, ions, etc.) of the solid do not possess any translatory motion, but have only vibrational motion about their mean positions. Solids maintain definite volumes independent of the size or shape of the container in which they are placed. Solids diffuse very slowly compared to liquids or gases. Particles constituting solids occupy fixed positions.
Classification of Solids: Different structural features of solids can form the basis for classifying them. They may be roughly divided into two classes: true solids and pseudo solids. A distinctive feature of solids is that they are rigid. A true solid has a shape which it holds against mild distorting forces. A pseudo solid lacks this character: it can be more easily distorted by bending and compressing forces, and it may tend to flow slowly even under its own weight and lose its shape. Pitch and glass are two examples of pseudo solids.
Connecting concepts: What is glass? In general terms, glass is an example of a pseudo solid, meaning that its solid character, rigidity or shape is only apparent; in actuality it is as good as a liquid, tending to flow under its own weight and thus able to change its shape. Glass lacks the character of a true solid, namely rigidity and resistance to distorting or bending forces, and this is why it is described as a pseudo or false solid. For example, you might have observed that in old buildings the window panes have become somewhat thicker at the bottom and thinner near the top, which indicates the fluid nature of a solid like glass. In essence, substances like glass are better described as “supercooled liquids”. In fact, pseudo solids do not melt sharply on being heated; instead, they gradually soften over a wide range of temperatures and eventually lapse into a liquid state. Since a solid like glass has no definite melting point and lacks a definite geometrical pattern, owing to the random arrangement of its constituent particles, it is also referred to as a vitreous or amorphous solid.
Solids may exist either in a shapeless ‘amorphous’ form or in a well-shaped ‘crystalline’ form. A brief description of each is given below:
Crystalline solids are characterized by the regular arrangement of atoms, ions or molecules in all three dimensions. This regular arrangement gives rise to long-range order in crystals: the definite pattern constantly repeating in space is such that, having observed it in some small region of the crystal, it is possible to predict accurately the position of particles in any region of the crystal, however far from the region under observation. The constituent particles in crystals are generally held by strong inter-atomic, inter-ionic or inter-molecular forces. When we try to cut a crystalline solid with a sharp-edged tool, it gives a clean cleavage. Crystalline substances have a definite rigid shape or morphology: every crystal is contained within a well-defined set of surfaces called faces or planes, and where two faces intersect, an edge is formed. When heated, a crystalline solid melts at a specific temperature called the melting point of the solid.
Connecting concepts: ‘Quartz’, the most widely distributed and useful of crystalline solids: Another name for quartz is silica, as it is made up of silicon and oxygen; by nature it is harder than steel and clearer than glass. As a mineral, it is generally found as large, clear, six-sided crystals with pyramid-like ends, called “rock crystals”. In fact, some of the most abundant rocks are composed largely of quartz. Sandstone consists of nothing but grains of quartz held together by a cementing substance, and quartz forms a large part of the rock we call granite. White sand is almost pure quartz, and all sands are largely made up of quartz. Semi-precious stones such as agate, amethyst and onyx are nothing but quartz. In terms of its uses, quartz finds extensive application in the manufacture of optical instruments and glass. Notably, thin slices of pure quartz are cut for use in radio broadcasting, to keep radio stations on their proper wavelengths. Special quartz lamps are sometimes used to give artificial sun treatments.
Amorphous solids include substances such as fused silica, rubber and polymers of high molecular mass. They may even have small parts in crystalline form and the rest in non-crystalline form; the crystalline parts of otherwise amorphous substances are called crystallites.
Properties of Solids: The special uses to which we put different kinds of matter depend, in the last analysis, on the forces between the molecules of a particular substance, for example the forces of cohesion, which show up when the distance between molecules is very small and which are especially strong in solids. These cohesive forces in solids are responsible for many of the useful properties of solid materials that we exploit both in industry and in everyday life.
Why do we need to weld solids such as metals? One of the most obvious attributes of a solid is its resistance to being pulled apart. This is what we call the ‘tenacity’ or tensile strength of a solid. It takes a force of over 200 tons to pull apart a good-quality steel rod of just one square inch in cross-section; this is what has made steel so useful in structural engineering. If the two pieces of a broken specimen, say a steel or iron rod, are pressed together again, they no longer stick together, because we cannot bring the molecules on the two sides of the break close enough for their cohesive forces to take effect and realign them into a complete rod. However, by heating the pieces and pounding them together, they can be welded into one. We also have solids, such as stone, that have a very low tensile strength but a strong resistance to crushing. This is what makes stone a very useful building material for arches and piers, where it bears compressive stresses only.
Another widely used property of solids is their elasticity: the ability of a substance to recover its original shape and size after distortion, as soon as the distorting force is removed. If a strip of steel or bronze is given a moderate twist, bend or stretch, it returns to its original shape after the distorting force has been removed. This property makes such metals very useful for making springs.
What is Hooke’s law? When a solid body is stretched, its molecules across any section attract each other and exert a restoring force; we describe this in physics as “stress”, defined as the restoring force per unit area. “Strain”, on the other hand, means distortion, and is defined as the change in dimension per unit of original dimension. Robert Hooke (1635-1703), as a result of a number of experiments, found that “within the limits of elasticity, strain is directly proportional to stress.” This is known as Hooke’s law. Thus, the elongation of a wire is directly proportional to the stretching force on it.
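A minimal worked instance of Hooke's law for a stretched steel wire, using an assumed Young's modulus for steel and illustrative wire dimensions (none of these numbers come from the text):

```python
import math

# In the linear elastic regime: stress = E * strain, E being Young's modulus.
E_steel = 200e9      # Young's modulus of steel, Pa (approx.)
force = 1000.0       # stretching force, N
diameter = 2.0e-3    # wire diameter, m
length = 2.0         # original wire length, m

area = math.pi * (diameter / 2) ** 2
stress = force / area                 # restoring force per unit area
strain = stress / E_steel             # strain proportional to stress
elongation = strain * length          # elongation proportional to force

print(f"Stress: {stress/1e6:.0f} MPa, elongation: {elongation*1000:.1f} mm")
# ~318 MPa and ~3.2 mm; doubling the force doubles the elongation.
```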
Grease or putty shows no tendency to recover its shape after being deformed, i.e. it completely lacks elasticity; such substances are said to be highly plastic. But even steel, if stressed beyond a certain amount, will fail to return completely to its original shape. The lesson is that structural materials should never be required to work near these limits.
Some substances break easily if stretching (tensile) forces are applied to them, yet they can support large weights or forces of compression. Such substances are called brittle, e.g. masonry, cast iron, etc.
Over and above all this, there are several other useful molecular properties of solids. Certain metals, such as gold and copper, are very malleable; that is, they can be pounded or rolled into very thin sheets. Gold can be beaten into sheets about 1/5000 of an inch thick. Other metals, such as copper, platinum and silver, can be drawn out into very fine wires; they are said to be ductile. Wires less than one-hundredth of the thickness of a hair strand can be made of platinum. Similarly, a mere 30 grams of silver can be drawn into a wire more than 30 miles long.
Connecting concepts: How hard can a solid be? The hardness of a material is measured by its ability to scratch other substances. Diamond is the hardest substance known on Earth. The formation of diamonds can be traced to the hoary past, when the earth was in its early cooling stages a hundred million years ago and more. At that time there existed beneath the ground a mass of hot liquid rock. That mass, which predominantly contained carbon, was subjected to extreme heat and pressure, and it was this carbon that became what we call diamond. Undoubtedly, diamonds are the hardest natural substance known to man, but then the question arises: how do we measure the hardness of a particular substance? One way of doing so is the scratch test, which involves scratching one substance with another, harder one.
In 1812, the German mineralogist Friedrich Mohs devised a scale of hardness for minerals based on such a test. On what is now known as the Mohs scale, the hardness of various minerals runs in the following order, from lowest to highest: (1) talc, (2) gypsum, (3) calcite, (4) fluorite, (5) apatite, (6) feldspar, (7) quartz, (8) topaz, (9) corundum, (10) diamond. Corundum stands at number 9 on the scale, whereas diamond stands at number 10 and thus emerges as the champion of hardness, with no rival anywhere to assail it. The second question then arises: since diamonds are so hard, how can they be shaped and cut? Interestingly, the only thing that can cut a diamond is another diamond; in fact, what diamond cutters use is a saw with an edge made of diamond dust. In India, Surat in Gujarat is a world-famous destination for diamond cutting. Diamond grinding and cutting wheels are used in industry in many ways, such as to grind lenses, to shape all kinds of tools made of copper, brass and other metals, and of course to cut glass. Today, more than 80% of all diamonds produced are used in industry!
Plasma (the fourth state of matter): It is a well-known fact that matter, be it solid, liquid or gas, is neutral with regard to electric charge. But, strangely, most of the matter in the universe exists in a special kind of gaseous state in which the atoms are in fact split and the electrically charged particles are disengaged. Matter in this ionized condition is called plasma, and it is becoming increasingly important in electronic devices, in the search for thermonuclear power, and as a source of electric power.
The branch of science dealing with the use of plasma to generate electricity is called magnetohydrodynamics. This method is admirably suited to countries with a plentiful coal supply and is therefore advocated by countries like Poland, which consider the power produced in this way to be cheaper than atomic energy.
Our everyday experience tells us that matter on earth exists in three distinct forms, solid, liquid and gaseous, as with liquid water appearing sometimes as solid ice and sometimes as gaseous steam. But when we consider all matter in the universe as a whole, we find that it exists in several other states not ordinarily encountered on earth. Although these states of extra-terrestrial matter are very varied, they may be broadly classified into four basic kinds beyond the three found on Earth: the plasma, degenerate, neutron and black-hole states of matter.
“Plasma state” is the state of matter whose temperature is so high that it ceases to remain a hot gas; instead, the atoms of the hot gas are split apart into their constituent electrons and nuclei, making it a glowing plasma. Plasma, therefore, is a hot gas of electrically charged particles, free protons and electrons. This is the state of matter of which stars like our own sun are made.
“Degenerate” state is the state of matter of which the stellar corpses called white dwarfs are made. A white dwarf is the terminal stage in stellar evolution, when a moderate-sized star has completely burnt out its resources of nuclear fuel and can no longer produce the energy with which it shines. Matter in the “degenerate” state is so dense that a matchbox full of it would weigh several tons.
“Neutron” state is the stuff of the debris that a massive star leaves behind when it explodes as a supernova. Neutron stars are even denser than white dwarfs.
Finally, the “black-hole” state is that speculative condition of matter in which the mass of millions of stars is packed within the eye of a needle! It is so infinitely dense that its gravity traps even its own rays of light, so that whatever it emits is virtually invisible.
It is that “singular” state of matter where almost all its diverse attributes disappear not unlike the grin of the vanished Cheshire cat in Alice in Wonderland….
The three states of matter we are all familiar with, solid, liquid and gas, are not the whole story. We know that each of the three states gives rise to the next on heating, as ice turns into water and water into steam. But physicists have now discovered that new states arise when temperatures are raised still further. At several thousand degrees, a gas will ionize into a plasma, with its atoms broken into nuclei and satellite electrons. At several million degrees, the atomic nuclei in turn are broken into individual protons and neutrons. At still higher temperatures, even these nuclear particles fragment into the “quarks” of which neutrons and protons are made.
At each successive phase transition, some structure is lost. Solid ice loses its crystalline structure on melting into water. Boiling water becomes a chaotic mélange of molecules, and with higher temperatures the molecules and eventually the atoms themselves fragment into sub-nuclear particles. Is there an ultimate limit to this continual fragmentation? Do the sub-nuclear particles like quarks and leptons fragment further to give rise to yet another state of matter?
Some theoretical physicists believe so. According to their speculation, the ultimate state of fragmented matter is that weird condition in which our universe originated with the big bang some 15-20 billion years ago. They believe that at the instant of the big bang the ambient temperature of the universe was more than 10³⁰ K. Under these Draconian conditions, matter appears in its primeval state of fragmentation, smashed into its ultimate building blocks with the complete obliteration of the individual identities of even quarks and leptons.
All the manifold diversity of present-day matter was merged into a single ultimate state at the instant of creation. In exactly the same manner, all four fundamental forces of nature, gravitation, electromagnetism, and the weak and strong forces, which control the structure of our existing macro- and micro-world, were merged into a single unified force, with physics reduced to its basic simplicity.
Alas! Theoretical physicists have yet to discover this ‘simple’ law of the unified force. All that their theorizing has established so far is this: the temperature of the primeval fireball plummeted from quasi-infinite values at the outset to a mere 10¹⁰ K within one second! Only for the brief span of about 10⁻³⁵ second could the temperature have been 10³⁰ K, high enough for the predicted ultimate phase of matter to have had a fleeting existence. Obviously, we will never be able to duplicate temperatures of this order to see what such an ultimate state of completely fragmented matter looks like.
A solution consists of two components – a solvent, which is the dissolving medium, and a solute, which is the substance dissolved in a solvent. Solutions are mixtures, because an infinite number of compositions involving a given solute and solvent are possible. In a solution, the solute is dispersed into molecules or ions and the distribution of the solute is perfectly homogenous throughout the solution.
A concentrated solution is one which contains a relatively large amount of solute per unit volume of solution. A dilute solution on the other hand, is the one which contains a relatively small amount of solute per unit volume of solution. The principal methods of expressing the concentration of solutions are molarity, normality, parts per million (ppm) etc. A standard solution is any solution of accurately known concentration.
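A minimal worked example of molarity, the first of the concentration measures named above (the solute, mass and volume are illustrative; the molar mass is a standard figure):

```python
# Molarity = moles of solute per litre of solution.
molar_mass_nacl = 58.44   # g/mol (Na ~22.99 + Cl ~35.45)

grams = 29.22             # mass of NaCl dissolved, g
volume_l = 0.5            # final solution volume, litres

molarity = (grams / molar_mass_nacl) / volume_l
print(f"Concentration: {molarity:.2f} M")   # 1.00 M
```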
Solutes may be divided into two classes: (i) Those which when dissolved in water produce a solution which conducts an electric current are called electrolytes; acids, bases and salts belong to this class. (ii) Those which when dissolved in water produce a solution not capable of conducting an electric current are called non-electrolytes.
This has to do with an important phenomenon exhibited by liquids, called vapour pressure. Pure water, or any pure solvent, always has a higher vapour pressure than a solution, because when a solute is dissolved in a solvent such as water, the solute takes up a portion of the solvent's surface, where otherwise, in the pure solvent, only solvent molecules would occupy the surface.
This has the effect of decreasing the number of molecules of solvent in contact with the air above, and consequently reduces the rate at which these molecules can escape as vapour. The extent to which a given solute lowers the vapour pressure of its solvent depends upon its concentration in the solution.
Given that a solution has a lower vapour pressure than the pure solvent, this explains another phenomenon associated with solutions, namely that the boiling point of a solution is higher than the boiling point of the pure solvent; this again is attributed to the definite difference in the vapour pressures of the two.
In the same vein, a solution freezes at a temperature below the freezing point of its pure solvent. This phenomenon is likewise related to vapour pressure.
This difference in vapour pressure finds a unique application in the operation of the various drying agents we use in our cupboards during moist seasons or climates.
If a dish containing pure water and one containing a solution are placed side by side inside a tightly covered container, each liquid will begin to emit vapour into the air inside the container such that the air will become saturated with vapours relative to the solution first. Thus, any additional vapour emitted by the water will condense back to liquid in the solution. Consequently, the air never gets a chance to become saturated with vapour relative to the water, with the result that all of the water eventually finds its way into the solution. This process is known as isothermal distillation. This phenomenon is the principle of operation of the various drying agents used in the cupboards of homes in moist climates. Solid chemicals, such as calcium chloride or potassium carbonate, which are very soluble in water, become moist in damp air. A saturated solution forms on their surface. The vapour pressure of this solution is less than the vapour pressure of the water in the moist air, and so more moisture is absorbed by the solution. Ultimately all of the solid dissolves in the moisture it absorbs, but even so, the solution continues to absorb moisture until the vapour pressure of the solution equals the pressure of the water in the atmosphere. This phenomenon is known as deliquescence and deliquescent chemicals used as drying agents are called desiccants.
You are, of course, familiar with various applications of the lowering of freezing points. Alcohol or ethylene glycol (permanent anti-freeze) is added to the radiators of cars to lower the freezing point of the water. If a path is icy, salt is sprinkled on it: the salt dissolves in the little water on the surface of the ice, forming a salt solution whose freezing point is lower than that of water, and the ice consequently melts and goes into solution.
Colloidal particles were once thought to be non-crystalline, and so they were given the name 'colloid', which means glue-like. In fact, however, colloidal particles may be either crystalline or non-crystalline. When colloidal particles are dispersed in a gas, liquid or solid, the result is called a colloidal system. It is impossible to have colloidal particles of a gas dispersed in a gas: gases disperse in gases in the form of small molecules, producing a homogeneous mixture, or solution, which is not classed as a colloid because a colloid is heterogeneous.
Sol: The name given to a colloidal system in which a solid is dispersed in a liquid. The system shows all the characteristics of a colloid, and the particles of a sol will not settle out.
Gel: In a gel, the liquid contains a colloidal solid evenly dispersed throughout the system but set in a structure which will not flow. Typical examples of a gel are jellies and gelatin.
Aerosol: An aerosol is a dispersion of either a solid or a liquid in a gas. When the dispersed colloidal particle is a solid, the result is smoke; the smoke from a factory may contain fine ashes and particles of unburnt fuel suspended in exhaust gases. When the colloidal particles dispersed in the gas are liquid, the resulting aerosol is a fog. When the dispersion medium is water, colloids are known as hydrosols; when the dispersion medium is alcohol, the colloids are known as alcosols; and in benzene they are known as benzosols.
Emulsion: An emulsion exists when colloidal particles of one liquid are dispersed throughout another liquid in which they are not soluble.
The process of making an emulsion is termed emulsification. Emulsions are of two types, the oil-in-water type and the water-in-oil type. The droplets in an emulsion are somewhat larger than the usual particles found in sols. Emulsions may be produced by vigorously agitating a mixture of the liquids, or better, by subjecting the mixture to ultrasonic vibration. Emulsions are generally unstable unless a third, stabilizing substance, known as an emulsifying agent, is also present. In the absence of an emulsifying agent, the dispersed droplets coalesce and eventually the emulsion breaks up into layers. The most frequently used emulsifying agents are soaps and detergents; their emulsifying properties help in the washing of clothes and crockery by emulsifying the grease and carrying it away in the water along with the dirt. Industrial uses of emulsions include the concentration of ores, and they are also used in salad dressings. Milk is a natural emulsion of liquid fat in a watery liquid, while paint is a synthetic emulsion.
Macromolecular Colloids: In this type the dispersed particles are themselves large molecules. They have very high molecular masses. They are usually polymers, for example, synthetic rubber.
Micelle: There is another type of colloid which behaves as a normal, strong electrolyte at low concentrations but exhibits colloidal properties at higher concentrations. These aggregates are known as micelles, and such substances are referred to as associated colloids. Soaps and synthetic detergents belong to this class.
Properties of Colloids: Like solute particles, colloidal particles also diffuse, though slowly, from a region of higher to a region of lower concentration.
Colloidal particles tend to settle down very slowly under the influence of gravity. The rate of sedimentation can be increased to a large extent by the use of a high-speed centrifuge known as an ultracentrifuge.
When a beam of light is passed through a true solution the path of the beam through the solution is not visible. But if the light is passed through a sol, its path becomes visible. This phenomenon is known as the Tyndall effect after the name of its discoverer.
Colloidal particles in most dispersions move either towards the cathode or the anode when an electric field is applied to the dispersion. This movement of colloidal particles under an applied electric field is called electrophoresis, and it forms the basis of the electrodeposition of colloids. Based on this principle, a very important process called gel electrophoresis is used in the technique of DNA fingerprinting, in which DNA fragments, acting as colloidal particles of different sizes and lengths, are segregated under the influence of an electric field. Rubber gloves and other intricate rubber articles are made by electroplating moulds with rubber from a rubber sol. Colloids in sewage water are also removed by this electrodeposition method, and colloidal ash particles in chimney gases are removed by passing them between high-voltage plates.
Importance of Colloids & Everyday Physics: Synthetic fibres, plastics and rubbers are composed of molecules in the colloidal size range, as are the molecules of proteins, starch and cellulose. One of the oldest colloidal systems is the topsoil of the earth. Protoplasm is also a well-known colloidal system.
In recent years a new field of study of particles in the size range of colloidal particles has developed: the study of genes, chromosomes, proteins and viruses. The methods and techniques of studying colloidal particles are now being applied to these particles in the field of biochemistry…
The Forces in Nature
Newton’s First Law of Motion: If the engine of a moving car is switched off, the car is gradually brought to rest by friction and air resistance. If such opposing forces were absent, a body once set in motion would go on moving for ever with a constant speed in a straight line. That is, no force at all is needed to keep a body moving with uniform velocity so long as no opposing force acts on it.
It was Galileo (1564-1642) who proposed this idea, which arose from his experiments with two inclined planes: he found that a ball allowed to roll down one smooth slope rose very nearly to the same height on the other. He felt that only friction stopped the ball from reaching exactly the same height on the second slope as that from which it had started on the first. He predicted that if the second plane were horizontal and friction were absent altogether, the ball would go on moving for ever (or until a force stopped it).
Newton (1642-1727) developed Galileo’s work further, virtually starting from the point where Galileo had left off, and went on to lay the foundation of his celebrated three laws of motion, the first law being essentially a summary of Galileo’s ideas. It states: “Every body continues in its state of rest or of uniform motion in a straight line unless it is compelled by an external force to act otherwise.”
The question we should ask about a moving body is not ‘what keeps it moving’, but ‘what changes or stops its motion?’ The smaller the external force opposing a moving body, the smaller is the force needed to keep it moving with a uniform velocity.
Everyday Physics in Newton’s first law: Several phenomena illustrate this law, say for example:
♥ If a moving vehicle comes to a sudden stop, a passenger inside it jerks forward if he is not careful; this happens because, although his feet remain in contact with the floor of the vehicle and come to rest suddenly, the upper part of his body retains its forward motion. Similarly,
♥ Removing dust from a carpet by beating it.
♥ Running fast before a long jump or
♥ A small hole formed in a window pane when a bullet is fired at it, all illustrate Newton’s first law of motion magnificently.
Newton’s Second Law: This law states: “The rate of change of momentum of a body is proportional to the applied force and takes place in the direction in which the force acts.” What we can deduce from this law is that the acceleration (the rate of change of velocity) of a body is directly proportional to the unbalanced force acting on it and inversely proportional to its mass, while the direction of the acceleration is the same as that of the force. The simple mathematical form of Newton’s second law is: F = ma.
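A quick numerical check of F = ma in Python (the mass and acceleration below are assumed purely for illustration):

# Newton's second law: F = m * a
mass_kg = 1000.0           # mass of a small car (assumed)
acceleration = 2.0         # m/s^2 (assumed)
force_n = mass_kg * acceleration
print(f"Force needed = {force_n} N")   # 2000 N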
♥The second law comes into play when brakes are applied to a vehicle and a large, decelerating force is created to bring the vehicle to a stop.
♥ It also explains why a glass bottle dropped onto sand does not break: the yielding sand permits a lower deceleration than a hard floor does.
Newton’s Third Law: The genesis of this law lies in the natural fact that forces never occur singly but always in pairs, as a result of the action between two bodies. For example, when you step forward from rest, your foot pushes backwards on the earth and the earth exerts an equal but opposite force forward on you. In every such case in nature, two bodies and two forces are involved. As in the above example, the small force that you exert on the large mass of the earth gives no noticeable acceleration to the earth, but the equal force that the earth exerts on your comparatively much smaller mass causes you to accelerate. Remember, though, that the equal and opposite forces do not act on the same body; if they did, there could never be any resultant force, and acceleration would be impossible, since the forces would simply cancel each other out.
This behaviour of forces is summed up by Newton’s third law of motion, which states: “To every action there is an equal and opposite reaction.” That is, if a body A exerts a force on a body B, then B exerts an equal and opposite force on A along the same line of action.
You really need to appreciate the third law and the effect of friction when stepping from a rowing boat. You push backwards on the boat and, although the boat pushes you forwards with an equal force, it is itself now moving backwards (because friction with the water is slight); this reduces your forward motion by the same amount – and you might fall in! Similarly,
If a gun is fired, a recoil, a sort of ‘kick’, is felt; the forward thrust of the projectile is matched by the backward thrust on the gun.
The motion of a rocket is much like the motion of a balloon losing air: the backward movement of the escaping air is balanced by the forward movement of the balloon. Exactly the same phenomenon governs the working of rockets and the science of rocketry, and it remains the essence of Newton’s third law of motion.
The flow of gas through the nozzle at the rear end of a rocket produces a pushing force, called thrust, directed away from the flow of exhaust gases. This thrust is produced as a consequence of the burning of the rocket fuel, better called the propellant, which may be solid or liquid. In rockets using liquid propellant, the fuel (such as liquid hydrogen) and the oxidizer (such as liquid oxygen) are stored in separate tanks and burnt together in a combustion chamber; whatever force the exhaust gases spurt out of the nozzle with, an equal forward thrust is gained by the rocket, proving Newton’s third law of motion exactly.
What is Friction? Friction is the resistance offered to moving bodies, or to the movement of one material against another, and it arises between any two materials one can think of.
The force of friction has such utility in our life that many of the jobs and activities we carry on would be simply impossible without it. Without friction, the belts of machines would slip, nails and screws would not hold, feet would not grip the floor or pavement, and the wheels of our automobiles would just spin without making things move. Interestingly, at the same time, in many cases, especially in machines, we actually try to reduce friction as much as possible. Why? The simple answer is to minimize the wear and tear of the machine and its parts, and to make the most efficient use of the energy that runs the machine.
So far as the nature of friction in different materials is concerned, in the case of solid bodies friction is caused mainly by unevenness in the surfaces that touch each other. Thus, the smoother these surfaces are, the less will be the friction. Interestingly enough, friction between unlike materials is less than that between substances of the same kind. So, when we lubricate surfaces, say when we oil the bearings of machines, we actually reduce friction by substituting liquid friction for solid friction. The most common frictions we observe around us are between solids, and these are predominantly of two kinds, called sliding and rolling friction.
Rolling friction is almost invariably less than sliding friction. That is why the wheel was one of man’s greatest inventions: it made it possible to substitute rolling friction for sliding friction in pulling loads.
Consider the following analogy to understand the implications of substituting the rolling friction for sliding friction that man has achieved by inventing the wheel.
Suppose a large and heavy stone is to be moved over a rough surface. It might take perhaps a dozen men to drag it over that surface against sliding friction. If we put the same stone on rollers, it might then take just six men to pull it over the same rough surface. Taking it further, if the stone is now put on a cart with two wheels, it is very likely that only four men could do the job comfortably, because by using the cart we have shifted the sliding friction to the contact at the axle and replaced it with rolling friction over the rough surface. Next, if we grease the axles and make the rough road smooth, quite likely only two men can pull the cart without much fuss. And lastly, if we provide the cart wheels with ball bearings, only one man can move the same stone with ease and comfort! This is how managing friction facilitates the many goings-on of the material world.
Connecting concepts: Why are airplanes and ships shaped the way they are? Nothing described above should be taken to mean that friction plays a role in solid objects only. In fact, water and air also create friction, and in order to minimize it, the bodies of our airplanes are streamlined to reduce air resistance, the so-called air friction. In the same vein, boats are shaped the way they are so as to cut down water friction as the boat moves over the water surface….
Gravitation is the force of attraction that acts between all objects because of their mass – that is, the amount of matter they are made of. Because of gravitation, objects that are on or near the earth are pulled toward it. The gravity of the moon and the sun causes the ocean tides on the earth. Gravitation holds together the hot gases in the sun. It keeps the planets in their orbits around the sun, and it keeps all the stars in our galaxy in their orbits about its centre. The gravitational attraction that an object has for objects near it is called the ‘force of gravity’.
Although the effects of gravity are easy to see, an explanation of the gravitational force has puzzled people for centuries. The ancient Greek philosopher Aristotle taught that heavy objects fall faster than light ones, and this view was accepted for many centuries. But in the early 1600’s, the Italian scientist Galileo introduced a different view of gravity: according to Galileo’s theory, all objects fall with the same acceleration (rate of change of velocity), unless air resistance or some other force slows them down.
Ancient astronomers studied the motions of the moon and the planets. But these motions were not correctly explained until the late 1600’s, when the English scientist Sir Isaac Newton showed that there is a connection between the force that attracts objects to the earth and the way the planets move.
Newton’s theory of gravitation: It says that the gravitational force between two objects is proportional (related directly) to their masses; that is, the larger the masses, the larger the force of attraction between the two objects. The theory refers to mass rather than weight, because the weight of an object on the earth is really the strength of the earth’s gravity on that object. On different planets, the same object would have different weights, but its mass would always be the same. Also, the gravitational force is inversely (oppositely) proportional to the square of the distance between the centres of gravity of the two objects (the distance multiplied by itself). For example, if the distance between the two objects doubles, the force between them becomes a fourth of its original strength.
Expressed in the form of an equation, the gravitational force F = Gm1m2/r², where ‘G’ is the universal gravitational constant and ‘r’ is the distance between two bodies of masses m1 and m2 respectively.
The universal gravitational constant (G) has been determined to be 6.67 × 10⁻¹¹ N m²/kg². Incidentally and curiously enough, the value of G has so far been found to be the same in all parts of the universe.
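Putting the equation and the constant together, here is a short Python check of the earth’s gravitational pull on a person; the person’s mass is assumed for illustration, while the earth’s mass and radius are standard textbook values:

# Newton's law of gravitation: F = G * m1 * m2 / r^2
G = 6.67e-11               # N m^2 / kg^2 (universal gravitational constant)
m_earth = 5.97e24          # kg, mass of the earth (textbook value)
m_person = 70.0            # kg (assumed)
r = 6.37e6                 # m, mean radius of the earth (textbook value)

F = G * m_earth * m_person / r**2
print(f"Gravitational pull on a 70 kg person = {F:.0f} N")  # about 687 N, i.e. the person's weight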
This gravitational force of attraction is always present between two bodies, although when the two bodies are small, as everyday objects are, the force is negligible in strength compared with the other fundamental forces (the electromagnetic force and the strong and weak nuclear forces).
Newton published his theory of gravitation in 1687. Until the early 1900’s, scientists observed only one phenomenon that disagreed with the predictions of Newton’s theory: the motion of the planet Mercury, and even there the disagreement was very small.
Gravity is the gravitational force between the earth (or another planet or satellite) and a body near its surface. This gravitational force is measured as the weight of the body on the earth (or planet). The term ‘gravity’ must therefore be used in a different sense from the term ‘gravitation’: gravitation is the phenomenon which gives rise to a force of attraction between all bodies, whereas gravity is the measurable effect of that phenomenon between a body and a planet. This implies that the term gravity is invariably used to express the weight of a body on a planet. The weight of an object can also be expressed in terms of the acceleration due to gravity (g) experienced by a body near the earth’s surface.
Gravitation is the force which pulls every object in the universe towards every other object in the universe. It is the force that makes a body fall through space towards the earth.
It was not until the time of Galileo (1564-1642) that any effort was made to measure the effect of gravity. Until that time it was believed that the speed with which a falling object struck the ground from any height depended only on the weight of the object. To test this belief, Galileo performed a simple experiment.
Galileo dropped objects of different weights from the Leaning Tower of Pisa to show how the “force” of gravity caused them to fall, and to check whether heavier objects fall faster than lighter ones. He showed that a heavy object and a light object, when dropped together, reached the ground at the same time.
Galileo also rolled a ball down a slope slowly enough to measure its position at definite times. He found that the increase in the speed of the ball was proportional to the time it was rolling. This means that, at the end of two seconds, it was traveling twice as fast as at the end of one second, and, at the end of three seconds, it was traveling three times as fast, and so on.
He also found that the distance it travelled was proportional to the square of the time it spent travelling. (The square of a number is the number multiplied by itself.) So at the end of two seconds it was four times as far as at the end of one second; at the end of three seconds, nine times as far, and so on. Taking a cue from Galileo’s discoveries and revelations,
Sir Isaac Newton made the next great discoveries about gravitation. Newton assumed that the force which attracts any body towards the earth should grow less as the distance grows greater. Out of his studies and the observations of others came Newton’s Law of Universal Gravitation. The basic idea of this law is: if the mass (amount of matter) of one of two attracting bodies is doubled, the gravitational attraction will also be doubled; but if their distance apart is doubled, the force of gravitational attraction between the two bodies will be only one-fourth as great.
Albert Einstein finally came on the scene and attempted to answer the same persistent question, “What is gravity?”, by explaining that it is due to the shape of four-dimensional space-time. This is a very complicated theory requiring considerable scientific training to understand, and his later work sought to relate the gravitational “field” to the electric, magnetic and electromagnetic fields. Even so, and we may attribute this to the frailty of the human mind, no one has yet been able to explain exactly what gravity is to everyone’s satisfaction.
We do know, however, that the acceleration (increase in speed) caused by gravity is about 10 metres per second every second. That means the speed of a falling body increases by 10 metres per second for each second it is falling. At the end of one second it is dropping at a speed of 10 metres per second; at the end of two seconds, 20 metres per second; and so on. Correspondingly, at the end of the first second a falling object will be 5 metres down; at the end of two seconds, 20 metres; and at the end of three seconds, 45 metres.
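The figures just quoted follow from the standard relations v = gt and d = ½gt², taking g = 10 m/s² as the text does; a short Python check:

# Free fall with g = 10 m/s^2 (as used in the text): v = g*t, d = g*t^2/2
g = 10.0
for t in (1, 2, 3):
    v = g * t              # speed after t seconds
    d = 0.5 * g * t**2     # distance fallen after t seconds
    print(f"t = {t} s: speed = {v} m/s, distance fallen = {d} m")
# prints 10/5, 20/20 and 30/45, matching the figures above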
A spacecraft may make several kinds of trips into space. It may be launched into orbit around the earth, rocketed to the moon, or sent past a planet. For each trip the spacecraft must be launched at a particular velocity (speed and direction), and the job of the launch vehicle is to give the spacecraft this velocity. If the spacecraft carries a crew, the spacecraft itself must be able to slow down and land safely on the earth.
Overcoming gravity is the biggest problem in getting into space. Gravity pulls everything to the earth and gives objects their weight. A rocket overcomes gravity by producing thrust (a pushing force). Thrust, like weight, can be measured in newtons or pounds. To lift a spacecraft, a rocket must have a thrust greater than its own weight and the added weight of the spacecraft. The extra thrust accelerates the spacecraft. That is, it makes the spacecraft go faster and faster until it reaches the velocity needed for its journey.
Rocket engines create thrust by burning large amounts of fuel. As the fuel burns, it becomes a hot gas. The heat creates an extremely high pressure in the gas. The gas leaves the rocket engine at high speed through the rocket nozzle. The reaction force created by the acceleration of the gas particles leaving the rocket engine causes the forward push on the rocket. This forward push on the rocket is the thrust, which is strong enough to lift the rocket from the ground.
Rocket fuels are called propellants. Liquid-propellant rockets work by combining a fuel, such as kerosene or liquid hydrogen, with an oxidizer, such as liquid oxygen (LOX). The fuel and oxidizer burn violently when mixed. Solid-fuel rockets use dry chemicals as propellants.
Engineers rate the efficiency of propellants in terms of the thrust that 1 kilogram of propellant can produce in one second. This measurement is known as the propellant’s specific impulse. Liquid propellants have a higher specific impulse than most solid propellants. But some, including LOX and liquid hydrogen, are difficult and dangerous to handle, and must be loaded into the rocket just before launching. Solid propellants are loaded into the rocket at the factory and are then ready to use.
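In standard rocketry terms (not spelled out in the text), the thrust equals the propellant mass-flow rate times the exhaust velocity, and the specific impulse in seconds is the exhaust velocity divided by g0. A rough Python sketch, with the flow rate and exhaust velocity assumed for illustration:

# Thrust = mass flow rate * exhaust velocity; Isp = exhaust velocity / g0
g0 = 9.8                   # m/s^2, standard gravity
v_exhaust = 4400.0         # m/s, roughly typical for LOX/liquid hydrogen (assumed)
mass_flow = 200.0          # kg of propellant burnt per second (assumed)

thrust_n = mass_flow * v_exhaust        # newtons
isp_s = v_exhaust / g0                  # specific impulse in seconds
print(f"Thrust = {thrust_n/1000:.0f} kN, specific impulse = {isp_s:.0f} s")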
The moon lies within the earth’s gravity, but at the moon’s distance the force of gravity is very weak. A spacecraft launched at 40,200 kilometres per hour – just 1,100 kilometres per hour greater than the speed necessary to reach the moon – can escape the influence of the earth’s gravity. This speed of 40,200 kilometres per hour, which corresponds to 11.2 kilometres per second, is called the escape velocity. A spacecraft launched at this speed comes under the influence of the sun’s gravity rather than that of the earth, which it has escaped, and thus goes into an orbit around the sun close to the earth’s own orbit.
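The figure of 11.2 km/s follows from the standard relation v = √(2GM/R); a short Python check using textbook values for the earth’s mass and radius:

# Escape velocity: v = sqrt(2 * G * M / R)
import math

G = 6.67e-11               # N m^2 / kg^2
M_earth = 5.97e24          # kg (textbook value)
R_earth = 6.37e6           # m (textbook value)

v_escape = math.sqrt(2 * G * M_earth / R_earth)
print(f"Escape velocity = {v_escape/1000:.1f} km/s")   # about 11.2 km/s
print(f"                = {v_escape*3.6:.0f} km/h")    # about 40,200 km/h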
The earth itself circles the sun in its orbit at a speed of 29.8 kilometres per second, and a spacecraft launched from the earth also travels this fast in relation to the sun. The craft’s escape velocity is used up in getting away from the earth; it does not affect the speed of the spacecraft around the sun. Escape velocity can send the spacecraft into orbit around the sun, but it cannot send the craft to a planet.
An astronaut orbiting the earth in a space vehicle with its rocket motors off is sometimes described as being ‘weightless’. If weight means the pull of the earth on a body, then the statement, although commonly used, is misleading. A body is not truly weightless unless it is outside the earth’s (or any other) gravitational field; in fact, it is gravity alone that keeps an astronaut and his vehicle in orbit.
On earth, we are made aware of our weight because the ground (or whatever supports us) exerts an upward push on us, as a result of the downward push that our feet exert on the ground. It is in fact this upward push, essentially, which makes us ‘feel’ the force of gravity.
An astronaut in an orbiting space vehicle is not unlike a passenger in a freely falling lift. The astronaut moves with constant speed along the orbit, but since he is travelling in a circle he has a centripetal acceleration, of the same value as that of his space vehicle and equal to ‘g’ at that height. The walls of the vehicle exert no force on him; he is thus unsupported and floats about with no apparent weight, and hence appears to be ‘weightless’. To be strictly accurate, we should not apply the term ‘weightless’ to him unless by weight we mean the force exerted on (or by) a body by (or on) its support.
When a lift suddenly starts moving upwards, the push of the floor on our feet increases and we feel heavier; on the other hand, the support is reduced when the lift starts moving downwards, and we seem to be lighter. In fact, we judge our weight from the upward push exerted on us by the floor. If our feet are completely unsupported, we experience weightlessness. Passengers in a lift which had a continuous downward acceleration equal to g would get no support from the floor, since they too would be falling with the same acceleration as the lift. There would be no upward push on them, and they would feel no sensation of weight.
Pressure
When a force acts on a body, it sometimes becomes necessary to consider not only the force but also the area on which it acts. For example, a tractor with wide wheels can move over soft ground because its weight is spread over a large area; as a result, the pressure on the ground is less and it does not sink so deeply. On the other hand, a nail can be driven into wood because the very high pressure exerted over the small area of its point is more than the wood can stand. The greater the area over which a force acts, the less is the pressure; conversely, the smaller the area, the greater the pressure.
Based on this observation, pressure is defined as the force per unit area: Pressure = Force/Area. This is why a nail can be driven into wood.
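The contrast between the tractor and the nail can be made vivid with a short Python sketch of Pressure = Force/Area (the force and areas below are assumed for illustration):

# Pressure = force / area: the same force over very different areas
force_n = 50.0             # newtons applied (assumed)
area_nail_point = 1e-6     # m^2, the tiny area of a nail point (assumed)
area_wide_tyre = 0.5       # m^2, total contact area of wide tyres (assumed)

print(f"Nail point: {force_n/area_nail_point:.0f} Pa")   # 50,000,000 Pa
print(f"Wide tyre:  {force_n/area_wide_tyre:.0f} Pa")    # 100 Pa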
Among the three states of matter, the effects of pressure are most discernible in liquids, although increasing pressure also converts gases into the liquid state, and in certain crystalline solids pressure produces the unique phenomenon of piezoelectricity. Still, the practical applications of pressure are most glaring in liquids, and thus form a part of our everyday Physics…
This is manifested when the weight of a liquid pulls it down into its container, causing a pressure on the container and on any object in the liquid.
Pressure Laws for Liquids: These pertain to the behaviour of a liquid in an open vessel and may be stated as below:
i) Pressure in a liquid increases with depth: The farther down you go in a liquid, the greater is the weight of liquid above.
Example: Water spurts out fastest and farthest from the lowest hole in a vessel. That is why the dam of a reservoir is made thicker at the bottom than at the top: the water pressure is greater at the bottom.
ii) Pressure at one depth acts equally in all directions:
Example: If a can of water has similar holes all round it at the same level, water comes out equally fast and equally far from each and every hole.
iii) A liquid always finds its own level: The pressure at the foot of a liquid column depends only on the vertical depth of the liquid and not on the width or shape of the tube.
iv) Pressure depends on the density of the liquid: The denser the liquid, the greater its pressure at any given depth of a liquid column (a worked example combining laws i and iv follows this list).
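The standard relation behind laws (i) and (iv), though not written out in the text, is P = density × g × depth; a short Python sketch with an assumed depth:

# Pressure in a liquid: P = density * g * depth
g = 9.8                    # m/s^2
depth = 10.0               # m below the surface (assumed)

for liquid, density in (("water", 1000.0), ("mercury", 13600.0)):  # kg/m^3
    p = density * g * depth
    print(f"{liquid}: pressure at {depth} m depth = {p:.0f} Pa")
# the denser liquid (mercury) gives a far greater pressure at the same depth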
♥Everyday Physics involved in the use of Pressure:
I) HYDRAULIC MACHINES: Hydraulic machines work by using pressure in liquids. Their action and the principle of their functioning depend on two chief facts about liquids: firstly, that liquids are almost incompressible (that is, their volume cannot be reduced by squeezing); and secondly, that liquids pass on any pressure applied to them to all parts of the liquid, distributing the pressure evenly throughout.
The principle on which hydraulic machines work is used widely in practical life. In hydraulic car brakes, when the brake pedal is pushed down, the piston in the master cylinder exerts a force on the brake fluid which, being a liquid, distributes the pressure, and the resulting pressure is transmitted equally to eight other pistons. These pistons then force the brake pads against the wheels, and the car is stopped. Similarly, a hydraulic jack, on which cars are mounted in a workshop, has a platform on top of a piston and is used in garages to lift cars. A hydraulic press is constructed similarly…
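The force multiplication in a hydraulic press or brake follows from the pressure being the same on both pistons, so that F2 = F1 × (A2/A1); a minimal Python sketch with assumed piston areas:

# Hydraulic press: same pressure on both pistons, so F2 = F1 * (A2 / A1)
f_pedal = 100.0            # N applied to the small master piston (assumed)
a_master = 0.0005          # m^2, master piston area (assumed)
a_slave = 0.005            # m^2, output piston area, 10x larger (assumed)

pressure = f_pedal / a_master
f_output = pressure * a_slave
print(f"Output force = {f_output} N")   # 1000 N - a tenfold multiplication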
II) ATMOSPHERIC PRESSURE: Many such everyday effects have a one-line answer: atmospheric pressure. Although the air forming the earth’s atmosphere stretches upwards for hundreds of kilometres, it thins out very rapidly after ten kilometres or so. Like a liquid, it exerts a pressure in all directions, and this pressure, which we call atmospheric pressure, decreases significantly with height. At sea level, atmospheric pressure is very large, about 100,000 Pa, i.e. 100 kPa (kilopascals). We do not normally feel atmospheric pressure because the pressure inside our bodies is almost the same as the outside atmospheric pressure. Our ears, however, are sensitive to pressure changes, and this is why people experience ear ‘popping’ in an aircraft at take-off: the outside air pressure falls as the aircraft climbs, so that a pressure difference is created between the air in the middle ear and that in the outer ear, and the eardrum becomes distorted. Swallowing helps to equalize the pressures. Modern high-flying aircraft have pressurized cabins in which the air pressure is kept sufficiently above the outside pressure to safeguard the crew and passengers from difficulty in breathing.
The large value and practical effect of atmospheric pressure were first demonstrated by Otto von Guericke, who invented the vacuum pump. (A vacuum is a space that contains no air.) About 1650, he fitted together two large hollow metal hemispheres to form an airtight sphere, which he then evacuated. So good was his pump that it took two teams of eight horses each to separate the hemispheres.
♥ When you suck at a drinking straw, your lungs expand and air passes into them from the straw, lowering the air pressure inside it. Atmospheric pressure pushing down on the surface of the liquid in the bottle is then greater than the pressure of the air in the straw, and so forces the liquid, your favourite cola perhaps, up into your mouth. (Remember: flow is from higher to lower pressure.)
When a rubber sucker is moistened and pressed on a smooth flat surface, the air inside is pushed out, and atmospheric pressure then holds it firmly against the surface. Such rubber suckers are extensively used as towel holders at home and, in industry, for lifting metal sheets etc.
In a vacuum cleaner, the fan creates a partial vacuum in the bag which causes air, carrying dust, to rush through the cleaning attachment into the bag.
In vehicles with ‘power brakes’, atmospheric pressure supplies an extra force to the brakes. In this case, the engine removes air from both sides of a piston in a cylinder which links the braking system to the brake pedal. When the pedal is pushed, a valve opens to let air into the right-hand side of the piston, and the resulting pressure difference forces the piston to the left.
Syringes of various kinds are used by doctors to give injections and by gardeners to spray plants. A syringe consists of a tight-fitting piston in a barrel, and is filled by putting the nozzle under the liquid and drawing back the piston. This reduces the air pressure in the barrel and atmospheric pressure forces the liquid up into it. Similarly, pushing down the piston drives liquid out of the nozzle.
When the piston of a bicycle pump is pushed in, the air between it and the tyre valve is compressed. This pushes the rim of the plastic cup washer against the wall of the barrel to form an airtight seal. When the pressure of the air between the plastic washer and the valve is greater than the pressure of the air in the tyre, air is forced past the tyre valve into the tyre. When the piston is drawn back, the tyre valve is closed by the greater pressure in the tyre, and atmospheric pressure then forces air past the plastic washer (which is no longer pressed hard against the wall) into the barrel.
Pressure and Diving: For every 10.3 m (approximately 10 m in sea water) that a diver descends, the pressure on his body increases by one atmosphere. The aqualung diving suit incorporates a rubber helmet fitted with a circular window and is supplied with air from compressed-air cylinders carried on the diver’s back. Using this apparatus, experienced divers can descend for very short periods to a maximum depth of about 60 m, where the total pressure is about seven atmospheres. At depths in the neighbourhood of 45 m they can work for periods of about 15 minutes. It is dangerous to stay longer at these depths since, as a result of the high pressure, an excess of nitrogen dissolves in the blood, and on return to the surface nitrogen bubbles form in the blood in the same way that bubbles form in a bottle of soda water when the cork is removed. Such a condition causes severe pain or even death, and is called decompression sickness. Note that the danger from the painful ‘diver’s bends’, as this condition is commonly known, is greatly reduced if a mixture of 8 per cent oxygen and 92 per cent helium is used in the gas cylinders.
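A short Python check of the figures just quoted, using the rule of roughly one extra atmosphere per 10 m of sea water:

# Total pressure on a diver: 1 atmosphere at the surface plus ~1 atmosphere per 10 m
surface_atm = 1.0
for depth_m in (10, 30, 60):
    total_atm = surface_atm + depth_m / 10.0
    print(f"Depth {depth_m} m: about {total_atm} atmospheres")
# at 60 m this gives about 7 atmospheres, as stated above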
The collective term applied to all liquids, gases and materials in the molten state is fluid. A solid body is said to be immersed in a fluid when it is either partially or completely surrounded by and in surface contact with the fluid. On earth everything is immersed in a fluid (air). Displaced Fluid is the fluid which would occupy the volume occupied by the immersed part of a solid body.
Any object in a liquid, whether floating or submerged, is acted on by an upward force called upthrust. This makes it seem to weigh less than it does in air. The upthrust arises because the liquid pressure, which pushes on all sides of the object, is greatest on the bottom, where the liquid is deepest. Experiments with other liquids, and also with gases, led to the formulation of a general principle, Archimedes’ principle, which states: “When a body is wholly or partially submerged in a fluid, the upthrust equals the weight of the fluid displaced.”
The upthrust thus depends on the volume of the object and not on its weight.
Physics behind the FLOATING & SINKING of objects:
Principle or Laws of Flotation: A stone held below the surface of water sinks when released; a cork, however, rises. Why? Because the weight of the stone is greater than the upthrust (also called buoyancy, or the buoyant force) on it, that is, greater than the weight of water displaced, so there is a net downward force on it. If the cork has the same volume as the stone, it displaces the same weight (and volume) of water. The upthrust on it when completely immersed is therefore the same as for the stone, but it is greater than the weight of the cork, and the resultant upward force makes the cork rise through the water. When an object such as a wooden block floats in water, the upthrust equals the weight of the object: the net force on the object is zero, and the weight of water displaced equals the weight of the object in air. This is an example of the principle of flotation: a floating object always displaces its own weight of fluid.
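A minimal Python sketch of the stone-and-cork comparison above; the volume and densities are assumed for illustration:

# Archimedes' principle: upthrust = weight of fluid displaced = rho_fluid * V * g
g = 9.8
v_object = 0.001           # m^3, same volume for the stone and the cork (assumed)
rho_water = 1000.0         # kg/m^3

upthrust = rho_water * v_object * g            # 9.8 N on either object
weight_stone = 2500.0 * v_object * g           # stone density 2500 kg/m^3 (assumed)
weight_cork = 250.0 * v_object * g             # cork density 250 kg/m^3 (assumed)

print(f"Upthrust = {upthrust} N")
print(f"Stone weight = {weight_stone} N -> sinks (weight > upthrust)")
print(f"Cork  weight = {weight_cork} N -> rises (weight < upthrust)")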
♥ Why does an object sink? An object sinks in any liquid whose density is smaller than its own; in other liquids it floats, partly or wholly submerged. For example, a piece of glass of relative density 2.5 (density 2.5 g/cm3) sinks in water (density 1.0 g/cm3). An iron nail sinks in water, but an iron ship floats because its average density is less than that of water.
The direction of buoyancy (or buoyant force) is always vertically upward. It acts at the centre of buoyancy (i.e. at the centre of gravity of the volume of displaced fluid). The weight of the immersed body acts downward, and at the centre of gravity of the solid.
Connecting concepts: How do Ships Float? The material (mostly iron) of which a ship is built is much denser than water, but the ship is built in the form of a large shell, or ‘hull’ as it is called. This shell displaces a very large volume of water, and so the ship floats. However, if the ship is loaded to such an extent that its total weight is more than the weight of the water which the whole ship can displace, it will sink. The ‘Plimsoll Line’ on the hull of all sea-going vessels is drawn to show the depths to which the ship can safely be loaded in different parts of the world and in different seasons. The winter load lines are lower than the summer ones, since sea water is denser in winter (provided the temperature is above 4°C) than in summer. A ship floats lower in fresh water than in salt water because fresh water is less dense than salt water.
How are Submarines able to move about under water? Submarines are able to move about under the surface of the sea because their average density can be controlled. A submarine sinks by taking water into its buoyancy tanks: once submerged, the upthrust is unchanged, but the weight of the submarine increases with the inflow of water, and so it sinks. To surface, compressed air is used to blow the water out of the tanks.
Balloons and Airships: A balloon filled with hot air or hydrogen weighs less than the cold air it displaces. The upthrust is therefore greater than its weight, and the resultant upward force on the balloon causes it to rise. A hot-air balloon carries a gas burner beneath it; a quick blast on the burner about every 30 seconds keeps the air inside the balloon hot. When balloons are fitted with motor-driven propellers and steering devices, they are known as airships.
Archimedes’ law of buoyancy for liquids is equally valid for gases. A large, hollow body such as a balloon can displace more than its own weight of air, and so can float in air. Since the air is less dense higher up, a balloon will rise only to the level where the weight of the displaced air becomes equal to its own weight.
(It may be noted that due to its inflammability, hydrogen has been replaced by helium in balloons and airships.)
FLUIDS IN MOTION: BERNOULLI’S PRINCIPLE:
The pressure is the same at all points on the same level in a fluid at rest, but this is not so when the fluid is moving. When a fluid moves, even with moderate speed, important new forces come into play. The most evident effect is the resistance that air, for instance, offers to the movement of objects through it. The resistance force increases with the cross-sectional area of the moving body and especially with its speed of motion. In addition, the shape of the object is of great importance.
A liquid will only flow through a pipe if the pressure at one end of the pipe is higher than that at the other, that is, if there is a pressure difference between the ends of the pipe. The pressure at different points in a liquid flowing through (a) a uniform pipe and (b) a pipe with a narrow part can be shown by the heights of the liquid in vertical manometer tubes fitted along the pipe:
In (a) the pressure drop along the tube is steady. In (b) the pressure falls in the narrow part B but rises again in the wider part C. Since in a given time the same volume of liquid passes through B as enters at A, the liquid must be moving faster in B than in A; the pressure in the liquid thus decreases as the speed of the liquid increases. Conversely, an increase of pressure accompanies a fall in speed. This effect, called Bernoulli’s Principle, is stated as follows: “When the speed of a fluid increases, the pressure in the fluid decreases, and vice versa.”
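For a horizontal pipe, Bernoulli’s principle takes the standard quantitative form P + ½ρv² = constant (the text states the principle only qualitatively); a short Python sketch with the speeds and pressure assumed for illustration:

# Bernoulli along a horizontal pipe: P + 0.5 * rho * v^2 = constant
rho = 1000.0               # kg/m^3, water
p_a = 120000.0             # Pa in the wide section A (assumed)
v_a = 1.0                  # m/s in the wide section (assumed)
v_b = 4.0                  # m/s in the narrow section B, faster flow (assumed)

p_b = p_a + 0.5 * rho * (v_a**2 - v_b**2)
print(f"Pressure drops from {p_a:.0f} Pa to {p_b:.0f} Pa in the narrow part")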
Bernoulli effects are to be seen in air streams also: the pressure falls in a moving air stream. Two sheets of paper come together when you blow between them, and a sheet of paper rises when you blow over it.
Everyday Physics of Bernoulli’s principle: This may be described in other words, as the applications of Bernoulli’s principle in our common day observations.
♥ As a fluid comes out of a jet, its speed increases and its pressure decreases. This fact is used in the Bunsen burner, in which air is drawn in by the jet of gas; in a carburettor, petrol is drawn into the air stream in a similar manner.
♥A spinning ball takes a curved path because, it drags air round with it thereby, increasing the speed of the air flow on one side and reducing it on the other.
♥ An aircraft wing, called an aerofoil, is so shaped that the air has to travel farther, and so faster, over the top surface than underneath. The resultant upward force on the wing provides ‘lift’ for the aircraft. Contrary to what one might expect, the front of such a body is broader than the rear. (But if the body is a high-speed jet plane or rocket travelling faster than sound, a sharp-nosed shape gives the best performance.)
♥ In the helicopter, the airflow over the lifting surfaces is produced by the whirling rotor blades rather than by rapid motion of the whole aircraft through the air. As a result, a helicopter can hover over one spot on the ground, or even move backwards.
Besides all these, a number of other familiar observations and devices can be described in terms of Bernoulli’s principle, for example:
♥ Two cars passing each other at high speed are in danger of colliding sideways because of the decrease in air pressure in the space between them.
♥ A strong gale is capable of lifting the roof off a house in the same way.
♥ In an atomizer (sprayer), a stream of air is blown across the end of a small tube that dips into the liquid. The decreased pressure at the side of this air stream allows normal air pressure, acting on the surface of the liquid in the bottle, to push the liquid up the tube. There the moving air breaks it up into small drops and drives it forward.
As we already know, nature is entirely made up of matter, which we may broadly classify as living and non-living. A great unifying principle of nature is that in the hierarchical structural organization of matter there is hardly any difference between living and non-living matter, at least at the subatomic level. The same is true of the energy transformations that matter constantly undergoes: we witness a whole host of energy transformations incessantly taking place around us, yet the total energy content of the universe has always remained constant, even though during each such transformation some amount of energy is lost to the environment as heat. This is how the matter-energy complex of nature has been maintained ever since the formation of the earth and the evolution of the first life upon it…
Going by this inherent truth of nature, it is not hard to conceive, even for a layman, that all the phenomena and properties of physical matter we study under Physics, manifested as heat, light, electricity, magnetism and so on, are but the consequence of the changes in matter and the transformations of energy that occur incessantly in the matter-energy complex of nature. And it is the sheer ingenuity of the human mind that has innovatively exploited this matter-energy complex to man’s advantage and given us everyday PHYSICS.
Before we go about discussing the various manifestations of energy, it is pertinent to know what energy is, in the simplest terms:
What is Energy?
Energy is the ability to do work. Energy is that which stands at the back of the forces, and makes the forces possible. Let’s try to understand this by considering the example of an automobile.
To make the motor go, a force must be used. Something must provide that force. That something is energy. Where does it come from? It comes from the petrol, and the energy is let loose by burning the petrol in the cylinder. This energy puts certain forces into operation, forces which produce motion through the gears and wheels of the car. The result is that the engine does work, and energy has made it possible.
There are two kinds of energy, potential and kinetic. First, let us understand potential energy. In the case of petrol, the molecules are held together by electrical forces, and energy, potential energy, is stored in these molecules. When the petrol burns, that potential energy is released.
Another example of potential energy is a suspended weight. There is stored-up energy in the weight which we can release just by letting it drop. Water at the top of the falls or behind a dam also has potential energy.
Now suppose the weight drops, or the water goes down over the falls. The mere fact that it is moving at a certain speed enables it to do work, and this energy is called “kinetic energy”: the energy a moving body has by virtue of its mass and its speed. As a body falls, it loses potential energy and gains kinetic energy, and the amount gained is exactly equal to the amount lost. In fact, the total amount of energy in the universe is always the same; we cannot create it or destroy it. What we actually do, whether we use falling water, coal or the atom, is to change it from one form to another.
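A short Python check of this exchange, using the standard relations PE = mgh and KE = ½mv²; the mass and starting height are assumed for illustration:

# A falling body trades potential energy (m*g*h) for kinetic energy (0.5*m*v^2)
import math

m = 2.0                    # kg (assumed)
g = 9.8                    # m/s^2
h_start = 20.0             # m, starting height (assumed)

for h in (20.0, 10.0, 0.0):                  # heights on the way down
    pe = m * g * h
    ke = m * g * (h_start - h)               # the PE lost appears as KE
    v = math.sqrt(2 * g * (h_start - h))     # the corresponding speed
    print(f"h = {h:4.1f} m: PE = {pe:6.1f} J, KE = {ke:6.1f} J, "
          f"v = {v:4.1f} m/s, total = {pe + ke:.1f} J")
# the total stays constant at 392.0 J throughout the fall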
Atomic energy is energy obtained from the atom. Every atom has energy locked within it; it is energy that holds the parts of an atom together. So in atomic energy, the nucleus, the core of the atom, is the source of the energy, and this energy is released when the atom is split.
But there are actually two ways of obtaining energy with atoms. One is called “fusion” and one is called “fission”. When fusion takes place, two atoms are made to form one single atom. The fusion of atoms results in the release of a tremendous amount of energy in the form of heat. Most of the energy given off from the sun comes from fusion taking place in the sun. This is one form of atomic energy.
Another form of atomic energy comes from the fission process. Fission happens when one atom splits into two. This is done by bombarding or hitting atoms with atomic particles such as neutrons (one of the particles that make up the atom).
An atom doesn’t split every time it is bombarded by neutrons. In fact, most atoms cannot be made to split. But uranium and plutonium atoms will split under proper conditions.
One form of uranium called “U-235” (an “isotope” of uranium) breaks into two fragments when it is struck by neutrons. And do you know how much energy this gives? Surprisingly enough, one kilogram of U-235 gives much more than 1,000,000 (one million) times as much energy as can be obtained by burning a kilogram of coal. A tiny pebble of uranium could run an ocean liner, an airplane or even a generator. So you see, atomic energy may well be the chief source of energy for man in the future.
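The “million times” claim can be checked roughly in Python, using the textbook figures of about 200 MeV released per fission and about 24 MJ of heat per kilogram of coal (both are assumed typical values, not taken from the text):

# Rough comparison of fission energy with chemical (coal) energy
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13       # joules per MeV

atoms_per_kg_u235 = (1000.0 / 235.0) * AVOGADRO
energy_per_fission_j = 200.0 * MEV_TO_J      # ~200 MeV per fission (typical figure)
fission_energy_per_kg = atoms_per_kg_u235 * energy_per_fission_j

coal_energy_per_kg = 2.4e7                   # ~24 MJ/kg for coal (typical figure)
print(f"1 kg U-235 ~ {fission_energy_per_kg:.1e} J")                       # ~8.2e13 J
print(f"Ratio to coal ~ {fission_energy_per_kg / coal_energy_per_kg:.1e}") # ~3e6, i.e. millions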
What is Heat?
At one point of time, it was believed that heat was a kind of fluid that passed from hot bodies into cold ones. This imaginary fluid was called “caloric.”
Today, we know that heat is the constant motion of atoms and molecules in objects. In the air, for instance, the atoms and molecules move about freely. If they move rapidly, we say that the temperature of the air is high or that the air is hot. If they move slowly, as on a cold day, we feel the air to be cool.
Atoms and molecules cannot move about as freely in liquids and solids as they can in gases, but motion is still present.
Even at the temperature of melting ice, the molecules are in constant motion. A hydrogen molecule at this temperature moves with a speed of 1,950 metres per second. Can you imagine? In 16 cubic centimetres of air, a thousand million million collisions occur among the molecules every second!
Heat and temperature are not the same thing. A large gas burner may be no “hotter” than a small burner, but it may supply more heat because it burns more gas. Heat is a form of energy, and when we measure heat, we are essentially measuring nothing but energy. Quantity of heat is measured in calories: a calorie is the amount of heat energy required to raise the temperature of one gram of water by one degree centigrade. The temperature of a body, on the other hand, only indicates the level to which the heat energy it contains has brought it; temperature is indicated by a thermometer and is expressed in degrees.
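The calorie definition translates directly into the relation Q = mass × specific heat × temperature rise; a short Python sketch with the quantities assumed for illustration:

# Quantity of heat: Q = mass * specific heat * temperature rise
mass_g = 500.0             # grams of water (assumed)
c_water = 1.0              # cal per gram per deg C (this is the calorie definition)
delta_t = 20.0             # degrees C of warming (assumed)

q_cal = mass_g * c_water * delta_t
print(f"Heat required = {q_cal} calories")   # 10,000 cal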
When two bodies are brought together and there is no passage of heat energy from one to the other, we say that they are at the same temperature. But if heat energy is lost by one (its molecules are slowed down), while this same energy is gained by the other (its molecules move faster), we say that heat has passed from the hotter to the colder body, and that the first body was at a higher temperature than the second one.
All things are made up of atoms or molecules, which are always moving. The motion gives every object its internal energy. The level of an object’s internal energy depends on how rapidly its atoms or molecules move. If they move slowly, the object has a low level of internal energy. If they move violently, it has a high level. Hot objects have higher internal energy levels than do cold objects. The words hot and cold refer to an object’s temperature.
Temperature, then, is an indication of an object’s internal energy level. A thermometer is used to measure temperature; it has a numbered scale so that temperature can be expressed in degrees. The two most common scales are the Celsius (or centigrade) and the Fahrenheit scales.
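The two scales are related by the standard conversion F = (9/5)C + 32; a minimal Python sketch:

# Converting between the Celsius and Fahrenheit scales
def c_to_f(c):
    return 9.0 / 5.0 * c + 32.0

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

for c in (0, 37, 100):     # freezing point, body temperature, boiling point of water
    print(f"{c:3d} deg C = {c_to_f(c):.1f} deg F")   # 32.0, 98.6, 212.0
print(f"98.6 deg F = {f_to_c(98.6):.1f} deg C")      # back to 37.0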
The temperature of an object however, also determines whether that object will take on more internal energy or lose some when it comes into contact with another object. If a hot rock and cold rock touch each other, some of the internal energy in the hot rock will pass into the cold rock as heat. If a thermometer were placed on the hot rock, it would show the rock’s temperature falling steadily. A thermometer on the cold rock would show a steadily rising temperature. Eventually, the thermometers on the two rocks would show the same temperature. Then, no further flow of heat would occur.
Just as water flows only downhill, so heat flows only from an object at a higher temperature to an object at a lower one. The greater the difference in temperature between the two objects, the faster the heat will flow between them. The rapidly moving atoms or molecules in the hot object strike the less energetic atoms or molecules in the cold object and speed them up. In this way, internal energy in the form of heat passes from a hot object to a cold object.
It is important to recognize that temperature and heat are not the same thing. Temperature is simply an indication of the level of internal energy that an object has. Heat, on the other hand, is the energy passed from one object to another. For example, a red-hot spark from a fire is at a higher temperature than the boiling water in a saucepan, but the water contains much more heat and would burn you more severely if you spilt it over yourself.
Do you ever find yourself asking: I wonder how hot it is? Or: I wonder how cold this is? If you are interested in heat, just imagine all the questions about heat that scientists have wanted to answer! The first step in developing the science of heat is to have some way of measuring it, and that is why the thermometer was invented. “Thermo” means “heat” and “meter” means “measure”, so a thermometer measures heat.
The first requirement of a thermometer is that it should always give the same indication at the same temperature. With this in mind, the Italian scientist Galileo began certain experiments around 1592 (100 years after Columbus discovered America). He made a kind of thermometer properly called “an air thermoscope”: a glass tube with a hollow bulb at one end, containing air. The tube and bulb were heated to expand the air inside, and then the open end was placed in a fluid, such as water.
As the air in the tube cooled, its volume contracted, and the liquid rose in the tube to take its place. Changes in temperature could then be noted by the rising or falling level of the liquid in the tube. So here we have the first “thermometer”, measuring heat through the expansion and contraction of air in a tube. One problem with this thermometer, it was found, was that it was affected by variations in atmospheric pressure, and therefore was not completely accurate.
The type of thermometer we use today uses the expansion and contraction of a liquid to measure temperature. This liquid is hermetically sealed in a glass bulb with a fine tube attached. Higher temperature makes the liquid expand and go up the tube; lower temperature makes the liquid contract and drop in the tube. A scale on the tube tells us the temperature.
This kind of thermometer was first used about 1654 by the Grand Duke Ferdinand II of Tuscany.
A Clinical Thermometer: This is a special kind of thermometer used by doctors and nurses. Its scale covers only a few degrees on either side of the normal human body temperature of 37°C. The tube has a constriction (that is, a narrower part) just beyond the bulb. When the thermometer is placed under the patient’s tongue, the mercury expands, forcing its way past the constriction. When the thermometer is removed from the mouth (after a minute or so), the mercury in the bulb cools and contracts, breaking the mercury thread at the constriction. The mercury beyond the constriction stays in the tube and allows the body temperature to be read. After the reading has been taken, the mercury is returned to the bulb by shaking the thermometer.
Refrigeration:
The temperature of an object can be lowered by bringing it in contact with a colder object. The temperature difference causes heat to flow from the warmer object into the colder one. For example, ice put in an insulated chest keeps food cold by removing heat from it. Another way to remove heat from an object, without using a colder object, is mechanical refrigeration. Mechanical refrigeration works by changing a substance called a refrigerant, commonly known as Freon in refrigerators and air-conditioners, from a gas to a liquid and back to a gas again. In a refrigerator, for example, a compressor squeezes the gaseous refrigerant (Freon) into a small space. The compression reduces the refrigerant's disorder so much that it becomes a liquid (as we already know, a gas on being compressed can be converted into the liquid state). The compressed liquid refrigerant then expands at a valve leading to pipes in the insulated part of the refrigerator. As the pressure falls, so does the temperature, and the refrigerant absorbs heat from the foods in the refrigerator. As heat flows out of the foods, their temperature falls; this heat passes into the refrigerant and raises its temperature. The warmed refrigerant thus becomes a gas again and flows through pipes back to the compressor. There, the refrigeration cycle begins again.
Thermos Flask:
The thermos flask is a container that keeps liquids hot or cold for many hours. It is also called a vacuum flask or Dewar flask. Thermos bottles vary widely in size, ranging in capacity from 60 ml to 60 l, and have many uses. They are commonly used to carry coffee, juice, milk or soup. Vacuum flasks are also used in scientific and medical work to store chemicals and drugs, to transport tissues and organs, and to preserve blood plasma. Sir James Dewar, a British chemist, invented the vacuum flask in 1892. He developed it for storing liquefied gases. Although his flask was designed to prevent the entry of heat from outside the container, it worked equally well in keeping liquids hot by reducing the loss of heat from the inside.

The modern thermos flask has the same basic design as Dewar's flask. It blocks the three processes through which heat is transferred: conduction, convection and radiation. A typical thermos flask has an inner container that consists of two glass flasks, one within the other. Glass does not transmit heat well, and so reduces heat transfer by conduction. The flasks are sealed together at their lips by melting the glass. Most of the air between the flasks is removed to create a partial vacuum. This vacuum hinders heat transfer by convection because it has so few air molecules to carry heat between the flasks. The facing surfaces of the flasks are coated with a silver solution. These coated surfaces act like mirrors and reflect much of the heat coming from either the inside or outside of the container back to its source. In doing so, they prevent heat transfer by radiation. Other features of thermos flasks help minimize both the loss and entry of heat. Most thermos flasks have a small mouth, which reduces heat exchange. The flasks are closed with a stopper made of cork, plastic or some other material that conducts heat poorly. The fragile inner container of a thermos flask is encased in metal or plastic for protection. A rubber collar around the mouth holds the inner container in place, and a spring at the base serves as a shock absorber.
Cooling by evaporation is also used in an air-conditioning unit, in much the same manner as in the working of a refrigerator. In an air conditioner, warm air is pulled in through a dust filter by a fan and cooled by giving up latent heat to the liquid refrigerant evaporating in the coiled pipe.
THERMAL EXPANSION: Effect of heat on the three states of Matter: When heat flows into a substance, the motion of the atoms or molecules in the substance increases. As a result of their increased motion, the atoms or molecules take up more space and thus the substance expands. The opposite occurs when heat flows out of a substance: the atoms or molecules move more slowly, therefore take up less space, and the substance contracts.
All gases and most liquids and solids expand when heated. But they do not expand equally. If a gas, a liquid and a solid receive enough heat to raise their temperatures by the same amount, the gas will expand the most, the liquid much less, and the solid the least of all.
Effect of heat on Solids: Did you ever wonder why railway lines are built with gaps between the rails? The plausible explanation is to provide space for the iron rails, as they tend to expand during extreme summers. In fact, most solids expand when heated and contract when cooled, but only by a very small amount. Nevertheless, large forces are created, which are in some circumstances useful and in others a nuisance. The kinetic theory's explanation is that the molecules of a solid vibrate more vigorously when heated, forcing each other a little farther apart; thus expansion in all directions results. Cooling, on the other hand, reduces the vibrations, the forces of attraction between molecules bring them closer together, and the solid contracts slightly.
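As a rough illustration of why those gaps are needed, here is a minimal sketch of linear expansion, ΔL = α × L₀ × ΔT. The coefficient used for steel (about 12 × 10⁻⁶ per °C) is a typical textbook value; the rail length and temperature swing are assumed.

```python
# A minimal sketch of linear thermal expansion: dL = alpha * L0 * dT.
# alpha for steel (~12e-6 per deg C) is a typical textbook figure;
# the rail length and temperature swing are assumed.

ALPHA_STEEL = 12e-6   # per deg C, coefficient of linear expansion

def expansion_m(length_m, delta_t_c, alpha=ALPHA_STEEL):
    """Increase in length of a solid heated through delta_t_c degrees."""
    return alpha * length_m * delta_t_c

rail_length = 20.0    # metres, one rail section (assumed)
summer_swing = 40.0   # deg C rise from a cold morning to a hot afternoon (assumed)

growth = expansion_m(rail_length, summer_swing)
print(f"A {rail_length:.0f} m rail grows by about {growth * 1000:.1f} mm")  # ~9.6 mm
# Small per-rail, but without gaps the accumulated force would buckle the track.
```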
Connecting concepts: How does a thermostat work? A thermostat makes use of a bimetal strip, made of two different metals that expand by markedly different amounts when heated; an effect of heat on solids. A bimetal strip is made of equal lengths of two different metals, such as brass and invar (an iron-nickel alloy with a very small expansion on the application of heat; the name is taken from the word 'invariable'). The two metals are fixed together so that they cannot move separately. When heated, brass expands more than invar and, to allow this, the strip bends with the brass on the outside. This behaviour of metals with respect to heat makes bimetal strips an optimal choice for devices called thermostats. Functionally, thermostats are meant to keep the temperature of a room or an appliance roughly constant. When the strip reaches the required temperature, it bends, breaks the circuit at the contacts and switches off the heater. After cooling a little, the strip straightens, remakes contact and turns the heater on again. A near-steady temperature results. If the control knob is screwed down, the strip has to bend more to break the heating circuit and must reach a higher temperature to bend sufficiently.
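The switching behaviour described above can be mimicked as a simple on/off control loop. This is only a toy sketch: the set-point, the width of the switching band and the heating/cooling rates are all invented numbers standing in for the bend-and-straighten cycle of the strip.

```python
# A toy sketch of thermostat behaviour as an on/off control loop.
# Set-point, band and rates are assumed, purely for illustration.

def thermostat_step(temp_c, heater_on, set_point=20.0, band=0.5):
    """Switch the heater like a bending bimetal strip: off once the room
    overshoots the set-point, back on once it cools a little below it."""
    if heater_on and temp_c >= set_point + band:
        heater_on = False   # strip bends, contacts break
    elif not heater_on and temp_c <= set_point - band:
        heater_on = True    # strip straightens, contacts remake
    return heater_on

temp, heater = 18.0, True
for minute in range(30):
    temp += 0.4 if heater else -0.3   # assumed heating/cooling rates per minute
    heater = thermostat_step(temp, heater)
    print(f"min {minute:2d}: {temp:5.1f} C, heater {'ON' if heater else 'off'}")
# The temperature oscillates in a narrow band around the set-point,
# i.e. a near-steady temperature results.
```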
Connecting concepts: Why does a glass container break when hot liquid is poured into it? When hot water is poured into a thick glass tumbler, the inside expands rapidly. The outside of the glass, however, does not expand at the same rate, because the heat takes some time to pass through the glass (glass is a poor conductor of heat). The result is that a strain is produced in the glass which makes it crack. Nowadays, a type of glass known as Pyrex is used for making glass vessels. This material expands only slightly on heating, and so vessels made of it can hold hot liquids without cracking. Vitreosil and quartz are other substances having the same property as Pyrex; they are used in making cooking utensils.
The filaments inside electric-light bulbs must be connected to the outside wiring for the lamp to be used, and this means that the wires must pass through the glass. For this purpose a substance must be found for these wires which expands to the same extent as glass, to prevent cracking from unequal expansion. Platinum may be used, but it is expensive. An alloy of nickel and steel (45% nickel) has been found suitable, which expands at the same rate as glass; it is known as platinite.
How do watches keep constant time? Watches have a special kind of ring, called the balance wheel, which continually executes periodic motion under the action of a hair spring. The period of oscillation of the balance wheel depends on its diameter. A seasonal change in temperature would cause the diameter to change and hence its periodic time: the watch would lose time when the diameter increases and gain time when it decreases. The balance wheel is therefore specially designed so that its diameter does not change with temperature. The rim of such a wheel is made of a bimetallic strip and is divided into two parts, with the more expanding metal on the outside, so that on heating the wheel rim bends inwards. This compensates for the outward expansion of the spokes. A decrease in temperature tends to decrease the diameter, but the rim then bends outwards and the diameter remains unchanged. Thus, a change in temperature does not change the period of oscillation, and the watch keeps accurate time…
Effect of heat on Liquids & the Unusual behaviour of water: When water is cooled to 4°C it contracts, as we would expect, but as it cools further from 4°C to 0°C, it surprisingly expands. Water therefore has its maximum density at 4°C. At 0°C, where water freezes, a considerable expansion occurs and every 100 cm³ of water becomes 109 cm³ of ice; this accounts for the bursting of water pipes in very cold weather. Further cooling of the ice causes it to contract. The unusual behaviour of water is represented by the volume-temperature graph in Fig (a) below.
The expansion of water between 4°C and 0°C is due to the fact that below 4°C the water molecules begin to form into open groups. This new arrangement occupies a larger volume, and this more than cancels out the contraction due to the fall in temperature.
Freezing of ponds (Question asked in GS-pre-2011): The behaviour of water between 4°C and 0°C explains why fish survive in a frozen pond. The water at the top of the pond cools first, contracts and, being denser, sinks to the bottom. Warmer, less dense water rises to the surface to be cooled. When all the water is at 4°C the circulation stops. If the temperature of the surface water falls below 4°C, it becomes less dense as a result of the expansion behaviour of water and thus remains at the top, eventually forming a layer of ice at 0°C. Temperatures in the pond are then as shown in Figure (b) above.
Heat passes from one object or place to another by three methods: (1) Conduction, (2) Convection, and (3) Radiation.
I) CONDUCTION: Conduction is the flow of heat through matter from places of higher to places of lower temperature without movement of the matter as a whole. When heat travels by conduction, it moves through a material without carrying any of the material with it. For example, the end of a copper rod placed in a fire quickly becomes hot. The atoms in the hot end begin to vibrate faster and strike neighbouring atoms. These atoms then vibrate faster and strike adjoining atoms. In this way, the heat travels from atom to atom, until it reaches the other end of the rod. But during the process, the atoms themselves do not move from one end to the other.
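The atom-to-atom hand-off of energy can be imitated with a toy finite-difference model: each segment of a rod drifts toward the average temperature of its neighbours, so heat spreads from the hot end without any material moving. The segment count and diffusion factor are assumed values chosen only for illustration.

```python
# A toy finite-difference sketch of conduction along a rod: heat diffuses
# from the hot end to the cold end with no bulk movement of material.
# Segment count and diffusion factor are assumed.

N_SEGMENTS = 10
rod = [20.0] * N_SEGMENTS   # rod initially at room temperature (deg C)
rod[0] = 500.0              # one end held in the fire

K = 0.2  # assumed diffusion factor per step (0 < K < 0.5 keeps it stable)

for step in range(200):
    new = rod[:]
    for i in range(1, N_SEGMENTS - 1):
        # Each segment drifts toward the mean of its neighbours:
        # faster-vibrating atoms pass energy to slower ones next door.
        new[i] = rod[i] + K * (rod[i - 1] - 2 * rod[i] + rod[i + 1])
    rod = new
    rod[0] = 500.0          # the hot end stays in the fire

print(["%.0f" % t for t in rod])  # temperatures fall smoothly along the rod
```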
The substances through which heat can be easily transmitted are called good conductors of heat, for example silver, copper and mercury. Silver and copper are exceptionally good, and all metals are usually good conductors. The conductivity of a metal depends directly on the number of free electrons: the larger the concentration of free electrons, the greater the conductivity. The substances which do not conduct heat easily are called poor conductors, such as glass, wood, cloth, air, distilled water, wax, paper, clay, plastics and cork; stone is moderately good.
Liquids and gases also conduct heat, but very slowly; water is a poor conductor. Example: a refrigerator needs to be defrosted every now and then, as the ice which forms on the freezer, being a poor conductor, slows down cooling.
Ebonite and asbestos are the worst conductors of heat and are called insulators.
Air is one of the worst conductors (that is, one of the best insulators). This is why houses keep warmer in winter and cooler in summer if they have cavity walls, which consist of two walls separated by an air space, and double glazed windows.
Materials which trap air, such as wool, felt, fur, feathers, polystyrene foam and glass fibers are also very bad conductors. Some are used as ‘lagging’ to insulate water pipes, hot water cylinders, ovens, refrigerators and the (roofs and walls) of houses. Others make warm winter clothes.
Connecting concept: A stone floor feels cold to the bare feet, but a carpet on the same floor feels warm. Stone, being a better conductor than carpet, conveys heat away from the feet more rapidly than the carpet does.
II) CONVECTION: Convection is the flow of heat through a fluid from places of higher to places of lower temperature by the movement of the fluid itself. Example: A hot stove in the room heats the air around it by conduction. This heated air expands and so is lighter than the colder air surrounding it. The heated air rises, and cool air replaces it. Then the cooler air near the stove becomes warm and rises. This movement of heated air away from a hot object and the flow of cooler air toward that object is called convection current. The current of air carries heat to all parts of the room.
Convection occurs in liquids as well as in gases. For example, convection currents will form in a pan of cold water on a hot stove. As the water near the bottom of the pan warms up and expands, it becomes lighter than the cold water near the top of the pan. This cold water sinks and forces the heated water to the top. The convection current continues until all the water reaches the same temperature.
Connecting concepts: Examples of 'Convection' in Live Action: If some liquid in a vessel is heated at the top, the liquid there expands and remains floating on the denser liquid beneath. No convection current is set up in this case, and hence the only way in which heat can travel downwards under these conditions is by conduction. To ensure that convection, rather than conduction, plays its role, the heating element in electrical appliances such as a geyser or an oven is fitted near the bottom. So it is with refrigerators, in which the freezer is always fitted at the top, so that the cooled air descends, giving place to warmer air, and the entire unit is cooled. Room heaters warm up our homes in winter through convection and are thus called convection heaters.
What is the origin of coastal breezes? Coastal breezes occur due to convection. During the daytime, the temperature of the land increases more quickly than that of the sea (because the specific heat capacity of the land is much smaller). The hot air above the land thus rises and is replaced by colder air from the sea; a breeze from the sea results. At night, the opposite happens: the sea has more heat to lose and cools down more slowly, so the air above the sea is warmer than that over the land, and a breeze blows away from the land.
III) RADIATION: Radiation is the flow of heat from one place to another by means of electromagnetic waves. While in conduction and convection the motion of particles, i.e. moving atoms or molecules, actually transmits the heat, in the case of radiation heat can travel and be transmitted even through a vacuum, which of course has no particles, atoms or molecules at all.
In any object, the moving atoms or molecules create waves of radiant energy. When this radiant energy strikes an object, it speeds up the atoms or molecules of that object. Energy in the form of the sun's rays, or radiation from the sun, travels through space down to the earth and warms the earth's surface.
Radiation of any kind and from any source possesses all the properties of electromagnetic waves; for example, it travels at the speed of radio waves and gives interference effects. When radiation falls on an object, it is partly reflected, partly transmitted and partly absorbed. It is the absorbed part that is responsible for raising the temperature of the object on which it falls.
What is the Green-house effect? Note that radiation is emitted by almost all bodies above absolute zero; it consists mostly of infrared radiation of fairly long wavelengths, but light and ultraviolet radiation are also emitted by very hot bodies such as the sun. In short, very hot objects like the sun are a source of predominantly short-wavelength IR that, eons ago, carried the warmth of the sun down to the earth and was responsible for the phenomenon of the natural greenhouse effect on that primitive earth; otherwise, the planet earth would have been a freezing cold and lifeless habitat at −18°C.
Over the years unfortunately, due to wanton human activities leading to environmental pollution, the average temperature of the earth has been increasing abnormally, leading to an artificial or man-made greenhouse effect. When the short-wavelength IR coming down from the sun falls on objects on the earth, part of it is sent back by those objects as longer-wavelength IR, which is trapped in the atmosphere by the gaseous blanket of CO2 instead of being allowed to escape back into space; hence an abnormal heating up of the atmosphere results. The term greenhouse effect is given because light and short-wavelength infrared radiation from the sun penetrate the glass of a greenhouse and are absorbed by the soil and plants inside it, raising their temperature. They in turn emit infrared radiation but, because the soil and plants are at a relatively low temperature compared to the sun, the emitted IR has a longer wavelength (is less energetic) and thus cannot pass out through the glass. The greenhouse thus acts as a 'heat-trap' and its temperature rises. This is exactly what is happening to our planet earth today as a consequence of what we call "thermal pollution".
Water vapour in the lower layers of the atmosphere exhibits the same 'selective absorption' effect as a greenhouse window actually does, and prevents the long-wavelength infrared radiation emitted by the earth from escaping.
It has been estimated that if the combustion of fossil fuels and the resulting increase of carbon dioxide in the atmosphere were to raise the earth's average temperature by only 3.5°C, both geographical features and climates worldwide could alter dramatically as well as drastically, putting virtually every life on the planet at the brink of disaster.
A substance combining with oxygen is always oxidized. All such combinations result in the evolution of heat energy. If the rate of reaction is slow, only heat energy and no light is given off, and the process is called slow oxidation. But if oxygen combines with the other substance so rapidly that light energy as well as heat is evolved, the process is known as combustion. In short, combustion is the burning of a substance in the presence of oxygen to produce both heat and light energy. For example, the rusting of iron is a slow oxidation, whereas the burning of wood is an example of combustion. The total amount of energy released by the oxidation of a substance is the same regardless of the rate of the combustion or oxidation process.
Before a substance can burst into flames it must be heated to a definite temperature. This minimum temperature is known as the kindling temperature of a solid or the flash point of a liquid.
As we know, there are three essential conditions for fire to occur: heat, which is required to let a substance reach a definite critical temperature before it bursts into flame; oxygen, without which no fire is possible; and fuel, the substance that actually burns. A fire can therefore be controlled, even in a conventional manner, by blocking any one of these three conditions: by removing the heat (cooling), by cutting the fuel supply, or by de-oxygenation (excluding oxygen). A fire extinguisher thus works by removing or blocking at least one of the three conditions necessary for a fire to continue, and most commonly makes use of the third option, blocking the flow of oxygen to the fire.
The simplest fire extinguishers, however, contain water, which when propelled onto the fire cools it down. Unfortunately, water extinguishers cannot be used on electrical fires, as there is a danger of electrocution, or on burning oil, as the oil will float on the water and spread the blaze. Given this limitation of the simplest water-based fire extinguishers, many of our domestic extinguishers contain liquid carbon dioxide under pressure and usually work by cutting the supply of oxygen to the fire. When the handle of the extinguishing device is pressed down, carbon dioxide is released as a gas that blankets the burning material and prevents oxygen from reaching it. Dry extinguishers spray powder, which then releases carbon dioxide gas. Wet extinguishers, on the other hand, are often of the soda-acid type: when activated, sulphuric acid mixes with sodium bicarbonate, producing carbon dioxide. The gas pressure forces the solution out of a nozzle, and a foaming agent may be added to produce foam.
Some extinguishers contain halons (hydrocarbons with one or more of their hydrogens substituted by a halogen such as chlorine, bromine or fluorine). These are very effective at smothering fires, but cause damage to the ozone layer in the atmosphere.
A heat engine is a device which changes the heat energy obtained by burning a fuel into kinetic energy. In an internal combustion engine, such as a petrol, diesel or jet engine, the fuel is burnt in the cylinder or chamber where the energy change occurs, i.e. heat energy is converted into kinetic energy. This is not so in other engines, such as the steam turbine.
Petrol Engines: Petrol engines make use of the rapid expansion of heated gases to force a piston to move inside the cylinder or chamber.
In a four-stroke engine, the action takes place in the following manner:
On the intake stroke, the piston is moved down (by the starter motor in a car or the kick start in a motor cycle turning the crankshaft), so reducing the pressure inside the cylinder. The inlet valve of the cylinder opens and the petrol-air mixture from the carburetor is forced into the cylinder by atmospheric pressure.
On the compression stroke, both valves are closed and the piston moves up, compressing the mixture.
On the power stroke, a spark jumps across the points of the sparking plug and explodes the mixture, forcing the piston down.
On the exhaust stroke, the outlet valve opens and the piston rises, pushing the exhaust gases out of the cylinder.
The crankshaft turns a flywheel (a heavy wheel), whose momentum keeps the piston moving between one power stroke and the next.
Most cars have at least four cylinders on the same crankshaft; each cylinder 'fires' in turn, in the order 1-3-4-2, giving a power stroke every half-revolution of the crankshaft (as the sketch below illustrates). As a result the running is smooth.
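A tiny sketch of that firing order, purely to make the half-revolution timing concrete:

```python
# A toy sketch of the 1-3-4-2 firing order: one cylinder delivers a
# power stroke on every half-revolution of the crankshaft.

FIRING_ORDER = [1, 3, 4, 2]

for half_rev in range(8):                 # four full revolutions
    cylinder = FIRING_ORDER[half_rev % 4]
    print(f"half-revolution {half_rev + 1}: cylinder {cylinder} fires")
```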
Two-stroke engines are used in mopeds and small boats. The cycle of operations is completed in two strokes. Valves are replaced by ports in the side of the cylinder which are opened and closed by the piston as it moves.
The efficiency of petrol engines is just about 30 per cent, which means that only 30 per cent of the heat energy supplied becomes kinetic energy: much of the rest is lost with the exhaust gases.
Diesel Engines: The operation of two- and four-stroke diesel engines is similar to that of the corresponding petrol-driven engines. Diesel (fuel oil) is used instead of petrol; however, there is no sparking plug, and the carburetor is replaced by a fuel injector.
Air is drawn into the cylinder on the down stroke of the piston and on the upstroke it is compressed to about one-sixteenth of its original volume (which is twice the compression in a petrol engine). This very high compression raises the temperature of the air considerably (mechanical energy is changed to heat, just as the air in a bicycle pump gets hot when it is squeezed). Thus, when at the end of the compression stroke fuel is pumped into the cylinder by the fuel injector, it ignites automatically. The resulting explosion drives the piston down on its power stroke.
Diesel engines, sometimes called compression-ignition engines, though heavier than petrol engines, are reliable and economical. Their efficiency of about 40 per cent is higher than that of any other heat engine.
Jet Engines and rockets: Jet engines (or gas turbines) may be of different kinds. In a turbo-jet, an electric motor sets the compressor rotating to start the engine. The compressor is a kind of fan; its blades draw in and compress air at the front of the engine. Compression raises the temperature of the air before it reaches the combustion chamber. Here kerosene (paraffin) fuel is injected and burns to produce a high-speed stream of hot gas which escapes from the rear of the engine, so thrusting it forward. The exhaust gas also drives a turbine (another fan) which is on the same shaft as the compressor and which keeps it turning once the engine is started.
Turbo-jet engines have a high power-to-weight ratio, that is, they produce large power for their weight, and are ideal for use in aircraft.
Rockets, like jet engines, obtain their thrust from the hot gases they eject when they burn a fuel. They can travel where there is no air, however, since they carry the oxygen needed for burning the fuel instead of taking it from the atmosphere as a jet engine does.
Steam Turbines: Steam turbines are used in power stations and on ships. Steam produced in a separate boiler enters the turbine and is directed by the stator (sets of fixed blades) on to the rotor (sets of blades on a shaft that can rotate). The rotor revolves and drives whatever is connected to it whether it is an electrical generator or a ship’s propeller. The steam expands as it passes through the turbine and the size of the blades increases along the turbine to allow for this. Rotary engines like the steam turbine run more smoothly than piston (reciprocating) engines do.
The Light
What is LIGHT? Without light we couldn’t see the world around us, yet we still don’t know exactly what light is!
We do know, however, that light is a form of energy. Its speed can be measured, and the way it behaves is well known to us. We also know that white light is not a special kind of light: it is a mixture of all colors. We call this mixture "the spectrum."
We also further know that color is not in the objects seen – it is instead, in the light by which they are seen. A piece of green paper looks green because it absorbs all the colors except green, which it reflects to the eyes. Blue glass allows only blue light to pass through it; all other colors are absorbed in the glass. Sunlight is energy. The heat in rays of sunlight, when focused with a lens, will start a fire. Light and heat are reflected from white materials and absorbed by black materials. That’s why white clothing is cooler than black clothing.
But then, what is the nature of light? The first man to make a serious effort to explain the nature of light was Sir Isaac Newton. He believed that light is made up of corpuscles, like tiny bullets shot from the source of the light, and thus propounded the so-called "corpuscular" theory of light. But some of the things that happen to light couldn't be explained by this theory.
So a man called Huygens came up with another explanation of light. He developed the “wave” theory of light. His idea was that light started pulses, or waves, the way a pebble dropped into a pool makes waves.
Whether light was waves or corpuscles was argued for nearly 150 years. The wave theory seemed to be the one that most scientists accepted. Then something was discovered about the way light behaves that upset this theory. Then, a very reasonable question arises:
Where does science stand today about the actual nature of light? Well, it is now believed that light behaves both as particles and as waves. Experiments can be made to prove that it is one or the other. So there just doesn't seem to be a single satisfactory answer to the question of what light is. For the time being, it is regarded as both particulate and wave-like in nature, although it behaves more as a wave than as particles.
What are Electromagnetic waves? All forms of light radiation, whether of the visible spectrum or IR or UV, are nothing but electromagnetic radiation, capable of behaving both as a wave and as a particle.
In strict Physics terms, electromagnetic waves are coupled periodic electrical and magnetic disturbances created by oscillating electric charges. These waves cover a wide range of frequencies and have different effects. However, they have certain common properties, such as:
All waves of the electromagnetic spectrum travel at the same velocity, a quantity known as the speed of light. This speed is approximately 300,000 km per second (or about 186,000 miles per second), i.e. 3 × 10⁸ m/s.
They always show the phenomena of diffraction, interference as well as reflection and refraction.
They always obey the equation c = ƒλ, where c is the speed of light, ƒ is the frequency of the wave and λ is its wavelength. Since c is constant (for a given medium), it follows that the larger the frequency of a wave, the smaller its wavelength (see the sketch after this list).
Because of their electrical origin and their ability to travel in a vacuum, they are all regarded as progressive transverse waves consisting of a combination of travelling electric and magnetic forces which vary in value and are directed at right angles to each other and to the direction of their travel.
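The sketch promised above: a quick check of c = ƒλ for a few familiar waves. The example frequencies are assumed round numbers chosen only for illustration.

```python
# A quick check of c = f * lambda. The frequencies below are assumed
# round numbers for illustration.

C = 3e8  # speed of light, m/s

def wavelength_m(frequency_hz):
    """Wavelength of an electromagnetic wave of the given frequency."""
    return C / frequency_hz

for name, f in [("medium-wave radio", 1e6),
                ("FM radio", 100e6),
                ("green light", 6e14)]:
    print(f"{name}: f = {f:.0e} Hz, lambda = {wavelength_m(f):.2e} m")
# Note the inverse relation: the larger the frequency, the smaller the wavelength.
```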
X-rays were discovered in Germany in 1895 by Wilhelm Roentgen, and thus are sometimes called “Roentgen rays.”
They are penetrating rays similar to light rays. They differ from light rays in the length of their waves and in their energy. The shortest wavelength from an X-ray tube may be one fifteen-thousandth to one-millionth of the wavelength of green light. X-rays can pass through materials which light cannot, because of their very short wavelength and consequently high energy. The shorter the wavelength, the more penetrating the wave becomes.
X-rays are produced in an X-ray tube. The air is pumped from this tube until less than one hundred-millionth of the original amount is left. In the tube, which is usually made of glass, there are two electrodes. One of these is called “the cathode.” This has a negative charge. In it is a coil of tungsten wire which can be heated by an electric current so that electrons are given off. The other electrode is “the target,” or “anode.”
The electrons travel from the cathode to the target (anode) at very great speeds because of the potential difference between the cathode and the target. They strike the target at speeds that may vary from 60,000 to 175,000 miles per second.
The target, i.e. the anode, is either a block of tungsten or a tungsten wheel, and it stops the electrons suddenly. Most of the energy of these electrons is changed into heat, but some of it becomes X-radiation and emerges from a window at the bottom as X-rays.
Have you ever wondered how X-ray pictures are taken of the bones in your body? The X-ray "picture" is a shadowgraph or shadow picture. X-rays pass through the part of the body being X-rayed and cast shadows on the film. The film is coated with a sensitive emulsion on both sides. After it is exposed, it is developed like ordinary photographic film. The bones and other objects that the X-rays do not pass through easily cast denser shadows and so show up as light areas on the film.
Today, X-rays play an important part in medicine, science and industry and are one of man’s most helpful tools…
All electromagnetic waves together represent the electromagnetic spectrum. At the extreme left, or so-called short, end of this spectrum are the gamma rays of extremely high frequency and short wavelength.
It is customary to describe these short wavelengths in units of Angstroms. One Angstrom is equal to 10⁻⁸ cm. Gamma rays are shorter than 0.03 Angstroms.
Next to the gamma rays, moving right along the EM spectrum, come the X-rays, which are classified as hard and soft X-rays respectively. The shorter X-rays, described as "hard", fall in the range 0.03 to about 0.6 Angstroms. The longer X-rays, described as "soft", range from about 0.6 to about 100 Angstroms. These can penetrate the flesh but not bone. They are used in dental X-ray photography and to inspect welded joints and castings in industry.
Third in order, after the X-rays, come the ultraviolet rays, which extend to about 4000 Angstroms. They can be detected just beyond the violet end of the visible spectrum of sunlight, or as the radiation from a filament lamp, by using fluorescent paper: this paper absorbs energy from UV radiation and re-radiates it as visible light, so that it can be seen to glow brightly.
Ultraviolet radiation also causes teeth, fingernails, fluorescent paints and clothes washed in some detergents to fluoresce.
After the UV, fourth in order, comes our very own visible spectrum, also called the white spectrum.
For the visible-light portion of the spectrum, it is customary to switch to a longer unit of length called the micron. One micron equals 10⁻⁴ cm; one micron in terms of Angstroms thus equals ten thousand Angstroms. The term micrometre can also be used in place of micron.
The visible-light portion of the spectrum begins with violet at 0.4 microns. Colors then grade successively through blue, green, yellow, orange and red, reaching the end of the visible spectrum at about 0.7 microns.
Next in the spectrum after the visible light, comes the infrared (IR) region consisting of wavelengths starting from about 0.7 microns to about 300 microns.
Our bodies detect infrared radiation by its heating effect. Anything that is hot but not glowing (i.e. below about 500°C) emits infrared radiation alone. Special photographic films detect infrared radiation and can take pictures in the dark. Infrared lamps help to dry paint on cars, and are used in the treatment of muscular complaints. The keypad (TV remote) for the remote control of a television set contains a tiny infrared transmitter.
Sixth in place, infrared rays then grade into still longer wavelengths called microwaves. The microwave region extends from about 0.03 cm to about 1 cm.
Within the microwave region is the radar region beginning at about 0.1 cm and extending to about 100 cm.
Frequencies at which radar systems operate grade into television and radio frequencies, the latter extending into wavelengths exceeding 300 m. They are radiated from aerials (antennae) and used to 'carry' sound, pictures and other information over long distances. Long, medium and short waves (2 km to 10 m) can bend (diffract) round obstacles and so can be received even if, say, a hill or a tower is in their way.
At the same time, they are also reflected by the layers of electrically charged particles in the upper atmosphere (the ionosphere), thus making long-distance reception possible, although the signals received far off may be weak because of their slight absorption by the ionosphere. At night, radio reception is better because the ionosphere is more settled in the absence of sunlight… (Question asked in GS-pre-2011).
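Before turning to the uses of each region, the boundaries quoted in this section can be gathered into a small classifier. The limits below are the text's round figures (in Angstroms, microns and centimetres, converted to metres), not precise physical boundaries.

```python
# A sketch that places a wavelength into the spectrum regions quoted in
# this section. Boundaries are the text's round figures, not exact limits.

ANGSTROM, MICRON, CM = 1e-10, 1e-6, 1e-2

BANDS = [                        # (upper wavelength limit in metres, name)
    (0.03 * ANGSTROM, "gamma rays"),
    (0.6  * ANGSTROM, "hard X-rays"),
    (100  * ANGSTROM, "soft X-rays"),
    (4000 * ANGSTROM, "ultraviolet"),
    (0.7  * MICRON,   "visible light"),
    (300  * MICRON,   "infrared"),
    (1.0  * CM,       "microwaves"),
    (float("inf"),    "radio waves"),
]

def classify(wavelength_m):
    """Return the spectrum region for a wavelength given in metres."""
    for upper_limit, name in BANDS:
        if wavelength_m <= upper_limit:
            return name

print(classify(0.5 * MICRON))     # visible light (green)
print(classify(10 * MICRON))      # infrared
print(classify(0.5 * ANGSTROM))   # hard X-rays
```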
Common uses of the different EM radiations:
Gamma rays: In radio-diagnosis and radiotherapy
X-rays: X-ray photography
UV-rays: In sterilization, as well as in remote sensing, including distinguishing fresh eggs from bad ones.
IR-rays: In electronics and for heating purposes, as well as extensively in remote sensing. Note that the remote controller of our TV sets makes use of IR radiation. Besides this, IR radiation is also made use of for filming in the dark as well as for drying paint on cars.
Microwaves: Radar, and TV and radio transmissions, in the capacity of radar and radio waves respectively. Also for cooking purposes in microwave ovens because, being EM waves, they have a heating effect.
VHF (very-high-frequency) and UHF (ultra-high-frequency) waves: They have shorter wavelengths and need a clear, straight-line path to the receiver. Local radio and television use them. They pass through the ionosphere, and can be received only over a limited range. The earth’s curvature limits the range of reception. Satellites help to overcome this shortcoming.
Microwaves (with wavelengths of a few cm): They are used for radar and also for international (as well as national) telephone and television links. The international links are via geostationary communication satellites. Signals from the earth are beamed by large dish aerials to the satellite where they are amplified and sent back to a dish aerial in another part of the world.
Radar sets differ in design and purpose, but they all operate on the same general principles. All radars produce and transmit signals in the form of electromagnetic waves, that is, related patterns of electric and magnetic energy. Radar waves may be either radio waves or light waves. Almost all radar sets transmit radio waves, but a few, called optical radars or laser radars, send out light waves. When the electromagnetic waves transmitted by a radar set strike an object, they are reflected. Some of the reflected waves return to the set along the same path on which they were sent. This reflection closely resembles what happens when a person shouts in a mountain valley and hears an echo from a nearby cliff; in that case, however, sound waves are reflected instead of radio waves or light waves. The waves transmitted by radar have a definite frequency, measured in units called megahertz (MHz). One megahertz equals 1 million hertz (cycles per second). Radio waves have lower frequencies, i.e. longer wavelengths, than light waves. Most radars that transmit radio waves operate at frequencies of about 5 to 36,000 MHz. Optical radars operate at much higher frequencies; some generate light waves with frequencies up to 1 billion MHz. In many cases, radar sets designed for different purposes operate at different frequencies.
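Radar ranging itself rests on a simple relation not spelt out above: since the pulse travels out and back at the speed of light, the distance to the target is d = c × t / 2, where t is the echo delay. A minimal sketch, with an assumed delay:

```python
# A minimal sketch of radar echo-ranging, assuming distance = c * t / 2
# (the pulse travels to the target and back). The 200-microsecond echo
# delay is an assumed figure for illustration.

C = 3e8  # radar radio waves travel at the speed of light, m/s

def target_range_m(echo_delay_s):
    """Distance (m) to the reflecting object, from the round-trip delay."""
    return C * echo_delay_s / 2

print(f"Target at about {target_range_m(200e-6) / 1000:.0f} km")  # ~30 km
```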
Microwaves are also used for cooking since like all electromagnetic waves, they also have a heating effect when absorbed.
Remote sensing makes extensive use of the ultraviolet, visible, infrared, microwave and radar portion of the spectrum.
It has now been established more than once that light has a dual nature and that it travels in a straight-line path in a homogeneous medium, a property we call the "rectilinear propagation of light". This straight-line travel of light can be observed in the shafts of sunlight that come through a rift in the clouds or, for that matter, in the sharpness of the shadows cast by an obstacle placed in light coming from a small concentrated source.
At the same time, it has also been observed that whenever a ray of light travels from one medium to another and reaches a surface that separates the two media (say, glass and water, or water and air), some of the light may be reflected from the surface, some of it may pass through the surface into the second medium and suffer refraction, and some of it will be absorbed by the molecules of the medium at the surface as well as within the medium. This is why we say that light always shows the characteristics of reflection, refraction and absorption, and these account for several natural phenomena, some of which man has even capitalized on, as reflected in what we call everyday Physics…
I) REFRACTION of Light:
In a homogeneous medium such as glass or water, light travels in a straight-line path, but when light passes from one optical medium to another, it is deviated from its original path. This bending of light when it passes from one medium to another is called refraction of light. In short, refraction is nothing but a change in the direction of travel of light when it passes from one medium to another, say from air to water. Refraction is essentially a surface phenomenon. When a ray of light travels from a rarer medium to an optically denser medium (air to glass, for example), the ray bends towards the normal. Conversely, when a ray passes from a denser to a rarer medium (glass to air), it bends away from the normal. A ray which is incident normally does not suffer refraction at all. Further, the incident ray, the refracted ray and the normal at the point of incidence all lie in the same plane. Note that the perpendicular at the point where a ray of light strikes is called the normal, and the angle formed between the incident ray and the normal is referred to as the angle of incidence.
Refraction can be explained in terms of change in the speed of light when it passes from one medium to another.
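The quantitative form of this bending is Snell's law, n₁ sin θ₁ = n₂ sin θ₂ (not stated explicitly here, but it is the standard formulation). A short sketch using common textbook refractive indices:

```python
# A short sketch of the bending rule in quantitative form (Snell's law,
# n1 * sin(t1) = n2 * sin(t2)). The refractive indices below are common
# textbook values, used for illustration.
import math

def refraction_angle(incidence_deg, n1, n2):
    """Angle of refraction for a ray crossing from medium 1 into medium 2."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.00) into water (n = 1.33): the ray bends TOWARDS the normal.
print(f"air->water at 45 deg: {refraction_angle(45, 1.00, 1.33):.1f} deg")  # ~32 deg

# Water into air: the ray bends AWAY from the normal.
print(f"water->air at 32 deg: {refraction_angle(32, 1.33, 1.00):.1f} deg")  # ~45 deg
```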
Examples OF REFRACTION in Live Action: Why does a pencil appear bent when placed in a glass of water?
To observe refraction, place a pencil in a glass of water and then look at the pencil from the top and one side. The pencil appears bent at the water surface. The light from the top part of the pencil comes directly to your eyes. The rays from the bottom part pass through the surface between the water and the air, suffer refraction, and so the pencil appears bent. Similarly, refraction can also be seen in a pool of water, whose apparent depth seems to be less than its actual or real depth; so it is when we see, say, a coin lying at the bottom of a pool or pond, which appears to be at a lesser depth than it actually is. The rationale is that the rays of light from the coin at the bottom of the pool are refracted away from the normal at the surface of the water, as they pass from a denser (water) to a rarer (air) medium; thus, when these rays of light enter the eye, they appear to come from a point a little above the actual point of depth.
Why do the stars twinkle? Interestingly, this phenomenon is again explained by the refraction of light that occurs in the atmosphere. The atmosphere consists of a number of layers of air of different densities, and the rays from the stars are continually being refracted before reaching the eyes of the observer. Since the layers of atmospheric air are not stationary, the image of the star appears at slightly different points at very short intervals. These different images of the star give a fixed observer the impression that the star is twinkling. The planets, on the other hand, being nearer, do not twinkle because the amount of light received from them is greater and so the variation is not very appreciable. Atmospheric refraction also accounts for the sun's visibility for some minutes after it has set below the horizon.
II) REFLECTION of Light: The reflection of light is governed by two most important laws of reflection:
First: That the angle of incidence (i) always equals the angle of reflection.
Second: That the incident ray (i), the reflected ray (r) and the normal all lie in the same plane meaning thereby that they all can be drawn on a flat sheet of paper.
The reflection may be a regular reflection or an irregular one. When light reflects from a smooth surface, all of its rays reflect in the same direction, describing a regular reflection. On the other hand, when light reflects from a rough surface, the rays reflect in many different directions, just because the normals at different spots on the surface point in many different ways, and hence describe a diffuse or irregular reflection. This is exactly why you can see your image in a mirror, but not in a sheet of writing paper!
Why is a mirror "silvered" at the back? This is to ensure that the 'silver' at the back of the glass acts as the reflecting surface, enabling us to view ourselves in the mirror. This "silvering" of a mirror is done by depositing a thin layer of the metal silver but, most commonly, by an amalgam consisting of tin foil and mercury, followed by painting the surface. This does not, however, mean that an ordinary un-silvered glass surface cannot reflect light. It does reflect some, as can be seen at night in a well-lit room, where the interiors of the room can be seen reflected in the window panes. This further explains why the reflection of the Sun in a lake is not extremely bright when the sun is overhead, but gets dazzling when the Sun is low in the sky. When the sun is overhead, its rays fall almost perpendicularly into the lake, with a very small angle of incidence, such that barely 8% of the incident light is reflected. However, when the sun is low in the sky, the rays of light fall at a considerably large angle of incidence and almost the entire light is reflected by the lake.
TOTAL INTERNAL REFLECTION: Rationale behind the sparkling of diamonds & glitter of air bubbles in water:
When light passes at small angles of incidence from a denser to a less dense medium (from glass to air, for example), a strong refracted ray emerges into the less dense medium and, at the same time, a weak ray is reflected back into the denser medium. Increasing the angle of incidence (the angle between the incident ray and the normal) increases the angle of refraction. At a certain angle of incidence, called the critical angle c, the angle of refraction is 90°. For angles of incidence greater than c, the refracted ray disappears and all the incident light is reflected inside the denser medium. The light does not cross the boundary and is said to suffer total internal reflection.
Total internal reflection causes a diamond to sparkle and air bubbles in water to glitter.
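For the critical angle itself, sin c = n₂/n₁ (rarer index over denser index) follows from Snell's law with the refraction angle set to 90°. A minimal sketch with standard textbook refractive indices:

```python
# A minimal sketch: the critical angle c satisfies sin(c) = n_rarer / n_denser.
# The refractive indices are standard textbook values.
import math

def critical_angle_deg(n_denser, n_rarer=1.0):
    """Critical angle for light passing from the denser medium into the rarer."""
    return math.degrees(math.asin(n_rarer / n_denser))

print(f"glass (n=1.50):   c ~ {critical_angle_deg(1.50):.0f} deg")   # ~42 deg
print(f"water (n=1.33):   c ~ {critical_angle_deg(1.33):.0f} deg")   # ~49 deg
print(f"diamond (n=2.42): c ~ {critical_angle_deg(2.42):.0f} deg")   # ~24 deg
# Diamond's very small critical angle means light entering it is easily
# trapped and internally reflected many times, hence the sparkle.
```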
Connecting concepts: How does a Mirage occur? A mirage is caused by the total internal reflection of light at layers of air of different densities. In the desert, the sand becomes very hot during the daytime and heats the layer of air in immediate contact with it. This layer of air, on being heated, expands and its density decreases. As a result, the successive upward layers are denser than those below them, being cooler. When a beam of light travelling from the top of a tree enters a rarer layer, it is refracted away from the normal. As a result, at each surface separating successive layers of air, the angle of incidence increases, and ultimately a state is reached when the angle of incidence becomes greater than the critical angle between two layers. At this position the incident ray suffers total internal reflection and is directed upwards. When this reflected beam of light enters the eye of an observer, an inverted image of the tree is seen, and the sand looks like a pool of water to the observer. On hot summer days, similar mirages are seen by motorists even on the roads.
Looming is a similar phenomenon. In this case, the air closer to the ground is much colder than the air above; the rays are bent in such a way that they enter the observer's eyes above the line of sight, and the objects then seem to be floating in the air.
Imagine a wanderer in the desert, dying of thirst. He looks off into the distance and sees a vision of a lake of clear water surrounded by trees. He stumbles forward until the vision fades and there is nothing but the hot sand all around him.
The lake he saw in the distance was a mirage. What caused it? A mirage is a trick Nature plays on our eyes because of certain conditions in the atmosphere. First we must understand that we are able to see an object because rays of light are reflected from it to our eyes. Similarly, the colour of an object is actually the colour of the visible spectrum that is being reflected by it towards us. Usually, these rays reach our eye in a straight line. So if we look off into the distance, we should only see things that are above our horizon.
Now we come to the tricks the atmosphere plays with rays of light. In a desert, there is a layer of hot, rarer air just above the ground which acts as a mirror. An object may be out of sight, well below the horizon. But when rays of light from it hit this layer of air, they are reflected to our eyes and we see the object as if it were above the horizon and in our sight. In reality, we are "seeing" objects which our eyes could not otherwise see! When the distant sky is reflected by this "mirror" of air, it sometimes looks like a lake.
Another similar mirage is created before our eyes when, on a hot day, as you approach the top of a hill, you may think the road ahead is wet. This is a mirage, too! What you are seeing is light from the sky that has been bent, or in effect reflected, by the hot air just above the pavement, so that it seems to come from the road itself.
Mirages occur at sea, too, with visions of ships sailing across the sky! In these cases, there is cold air near the water and warm air over it. Distant ships that are beyond the horizon can be seen because the light waves coming from them are reflected by the layer of warm air and we see the ship in the sky!
One of the most famous mirages in the world takes place in Sicily (Italy), across the Strait of Messina. The city of Messina is reflected in the sky, and fairy castles seem to float in the air. The Italian people call it “Fata Morgana”, after Morgan Le Fay, who was supposed to be an evil fairy who caused this mirage.
Thus, the bottom line about the occurrence of mirages in nature is that, in Physics parlance, they are nothing but illusions created by the refraction and total internal reflection of light…
Light pipes or optical fibres: Light can be trapped by total internal reflection inside a bent glass rod and 'piped' along a curved path. A single, very thin fibre of very pure (optical) glass behaves in the same way. If several thousand such fibres are taped together, a flexible light pipe is obtained that can be used (by a doctor or an engineer, for example) to light up some awkward spot. If necessary, a second bundle of fibres carries back the image for inspection. A more recent use of optical fibres is in telecommunication, for carrying pulses of light from a laser which represent information such as telephone conversations, television pictures and computer data. An optical fibre has a much greater information-carrying capacity than a copper cable of the same thickness carrying an electric current, as well as being thinner and lighter.
III) DISPERSION of Light AND the Physics of COLOURS: What is dispersion? Dispersion is simply the splitting of ordinary (white) or visible light into a spectrum of seven distinct colors, from violet at one extreme end to red at the other, which may be remembered by the acronym VIBGYOR.
An experiment was performed by Newton in 1666. He allowed a narrow beam of sunlight (which is white) to fall on a triangular glass prism. It produced a band of colours (called a spectrum) on a white screen. The effect is known as dispersion and Newton concluded that white light is a mixture of many colours of light, which the prism separates out because the refractive index of glass is different for different colours; the refractive index is greater for violet light (since it is refracted the most) and least for red light.
A pure spectrum is one in which the colours do not overlap, as they do when a prism alone is used. Therefore, a convex lens has to be used to focus each color distinctly.
A rainbow is one of the most beautiful sights in nature, and man has long wondered what makes it happen. Even Aristotle, the great Greek philosopher, tried to explain the rainbow. He thought it was a reflection of the sun’s rays by the rain, and he was wrong!
Sunlight or ordinary white light is really a mixture of all the colors. You've probably seen what happens when light strikes the beveled edge of a mirror, or a soap bubble. The white light is broken up into different colors. We see red, orange, yellow, green, blue and violet. An object that can break up light in this way is called "a prism." The colors that emerge form a band of stripes, each color grading into the one next to it. This band is called "a spectrum." A rainbow is simply a great curved spectrum, or band of colors, caused by the breaking-up of light which has passed through raindrops. The raindrops act as prisms in this case.
A rainbow is seen only during showers, when rain is falling and the sun is shining at the same time. You have to be in the middle with the sun behind you and the rain just in front of you or you can’t see a rainbow! The sun shines over your shoulders into the raindrops, which break up the light into a spectrum or band of colors. The logic is that the sun, your eyes and the center of the arc of the rainbow must all be in a straight line!
If the sun is too high in the sky, it’s impossible to make such a straight line. That’s why rainbows are seen only in the early morning or late afternoons or when the sun is setting. A morning rainbow means the sun is shining in the east; showers are falling in the west. An afternoon rainbow means the sun is shining in the west and rain is falling in the east.
Superstitious people used to believe that a rainbow was a sign of bad luck. They thought that souls went to heaven on the bridge of a rainbow and when a rainbow appeared it meant someone was going to die!
Therefore, in Physics terms, we can say that the rainbow is one of the most commonly observed examples of dispersion of light. It is formed when sunlight passes through myriad droplets of water suspended in the air after a shower. An arc of the spectrum colors is produced as a result of refraction and total internal reflection of rays of sunlight passing through raindrops. A bright, primary bow with violet on the inside of the arc is produced as a result of one total internal reflection. A dimmer, secondary bow with violet on the outside of the arc may also be seen at times; this is caused by two internal reflections within the raindrops. Note that a rainbow can only be seen when the sun is low in the sky and behind the observer.
The Physics of Colour
When Sir Isaac Newton passed a beam of sunlight through a glass prism, he proved that sunlight is made up of colors. As the light was bent by the prism, it formed a spectrum.
Most people can see six or seven colors in the spectrum, but with instruments, more than 100 colors can be seen in it. But white light is really made up of three basic colors which are called “the primary colors.” They cannot be made from any other colors. The primary colors of light are orange-red, green, and violet-blue (RGB).
In the spectrum, however, we can also see three mixed colors with the naked eye. These are called "secondary colors." They are green-blue, also called cyan (turquoise), yellow and magenta red (CYM). You can make these secondary colors by mixing other colors together.
Colors consist of wavelengths to which the human eye is sensitive. Insects and many other creatures respond to other wavelengths and see colors which human beings cannot; for example, bees can see, and are particularly sensitive to, ultraviolet light. In fact, light or color wavelengths are very short, ranging from about 400 nm at the shorter violet end to about 700 nm at the longer red end of the spectrum.
Note that the so-called paint colors are actually substances, and they are exactly the opposite of light colors. The secondary colors in light are actually the primaries in paint. This simply means that in paint the primary colors are yellow, green-blue (turquoise) or cyan, and magenta red; and the secondary colors are orange-red, green and violet-blue, i.e. exactly the other way round from the visible light colors. This is the reason they are described as the artist's colors.
What is a hue? A color that is brilliant and has no black or white paint in it is called "a hue." Yellow, red, blue, green, etc., are all hues. In short, a hue is the property that gives a colour its name, for example red, orange or yellow. A color made by mixing a hue with black is called "a shade"; deep brown, for example, is a shade. On the other hand, a color made with a hue and white is called "a tint"; pink and ivory are tints. A color that is a mixture of a pure hue, black and white is described as "a tone"; tan, beige, straw and grey are nothing but tones.
Red paint inside the can doesn't look red; it looks black! Why? Because where there is no light, there is no color. For the same reason, not only are we unable to see any color in a dark room, the color is simply not there! The color of an object depends on the material of the object and the light in which the object is seen. An orange-red sweater looks orange-red because the dye with which the wool was treated reflects the orange-red part of the light and absorbs the violet-blue and green parts.
As noted above, the colour of an object depends on the colour of the light falling on it and the colour(s) it transmits or reflects back to our eyes. A filter is made of glass or celluloid and lets through light of certain colours only. For example, a red filter transmits mostly red light and absorbs the other colours; it therefore produces red light when white light shines through it. Thus, transparent objects are of the colour they allow to pass through them. Opaque objects do not allow light to pass but are seen by the light reflected from them. A white object reflects all colours and appears white in white light, red in red light, blue in blue light and so on. A blue object appears blue in white light because the red, orange, yellow, green and violet components of white light are absorbed and only blue is reflected. It also looks blue in blue light (an object always reflects back its own colour), but in red light it appears black, since no light is reflected and blackness indicates the absence of colour. A black object absorbs all colours and does not reflect any wavelength.
Mixing of colors: In science, red, green and blue are called primary colours (they are not the artist's primary colours) because none of them can be produced from other colours of light. However, they give other colours when suitably mixed. The primary colours can be mixed by shining beams of red, green and blue light onto a white screen so that they partially overlap.
The colors formed by adding two primaries are called secondary colours: yellow, cyan (peacock blue) and magenta. The three primary colours give white light when they are mixed together, as do the three secondaries. A primary colour and the secondary opposite to it in the colour triangle, such as blue and yellow, also give white light; any two colours producing white light are called complementary colors. In this sense, blue & yellow, green & magenta, and red & cyan are all complementary pairs.
Connecting concepts: How do 'paints' and 'dyes' exhibit their color? Note that paints and dyes are made from substances we call pigments. It is these pigments that actually give colour to paints and dyes, by reflecting light of certain colours only while absorbing all others. However, since most pigments are impure substances, they reflect not a single colour but several. When pigments are mixed together, the colour reflected is the one common to all of them. For example, mixing blue and yellow paints gives a green mixture because blue paint reflects indigo and green (its neighbours in the spectrum) as well as blue, while yellow paint reflects green, yellow and orange (again, its neighbouring colours). Green being common to both, only green is reflected by the mixture.
Mixing coloured pigments is thus a process of 'subtraction', whereas coloured lights are mixed together by a process of 'addition'.
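A toy model makes the addition/subtraction distinction concrete. In the hedged Python sketch below, lights are treated as RGB triples that add component-wise, while pigments are treated as reflectance triples that multiply component-wise; the numerical values are illustrative assumptions, not measured data.

    # Additive mixing of coloured lights (values 0..1 per R, G, B channel)
    def add_lights(a, b):
        return tuple(min(1.0, x + y) for x, y in zip(a, b))

    # Subtractive mixing of pigments: each pigment reflects only a
    # fraction of each channel, so reflectances multiply.
    def mix_pigments(a, b):
        return tuple(x * y for x, y in zip(a, b))

    red, green = (1, 0, 0), (0, 1, 0)
    print(add_lights(red, green))        # (1, 1, 0) -> yellow light

    # Illustrative reflectances: blue paint also reflects some green;
    # yellow paint reflects green as well as red.
    blue_paint   = (0.0, 0.5, 1.0)
    yellow_paint = (1.0, 1.0, 0.0)
    print(mix_pigments(blue_paint, yellow_paint))  # (0, 0.5, 0) -> green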
IV) “SCATTERING” of Light & Why is the sky blue?
Scattering describes what happens when light rays (indeed, all electromagnetic rays) strike atoms, molecules, or other individual tiny particles. These particles send the rays of light off in new directions; that is, they cause the rays to scatter. Our clear sky appears blue to us because air molecules scatter more blue rays towards us than they do the other colours in sunlight. Violet and blue have shorter wavelengths than the other colors of the spectrum and are thus scattered the most. But our eyes are not very sensitive to violet, and the sun is itself relatively weak in violet light, so the scattered light we notice is blue, and that is how our sky looks to us.
Why does a setting sun appear red to us? When the sun is near the horizon, it looks orange or red because the light reaching us has lost much of its other colours by scattering, whereas red light, with its longer wavelength, scatters the least of all. Moreover, the path the light must traverse is very long when the sun is at the horizon, and only light of longer wavelength survives this longer path to reach our eyes. Blue, green and the other colours of the spectrum are almost entirely scattered away en route, and only red reaches the observer.
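The preference for blue follows from Rayleigh's law, under which scattered intensity varies as 1/wavelength^4. A minimal sketch, assuming representative wavelengths of 450 nm for blue and 650 nm for red:

    # Rayleigh scattering: intensity proportional to 1 / wavelength^4
    blue_nm, red_nm = 450, 650   # assumed representative wavelengths
    ratio = (red_nm / blue_nm) ** 4
    print(f"Blue light is scattered about {ratio:.1f}x more than red")  # ~4.4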
V) “INTERFERENCE” of LIGHT:
In many cases, light can be thought of as a wave with crests and troughs. When two light waves cross through the same spot, they interfere with each other; that is, they add to or subtract from each other. Suppose that whenever a crest of one wave passes through a particular spot, the crest of another wave does so too. The two crests add together to give a larger crest. This process is what we call constructive interference. Since the resulting wave is formed by the addition of two separate light waves at the same spot, it gives a brighter light than either wave would have given separately. Taking this the other way round, suppose that instead of two crests meeting at the same spot, a crest of one wave and a trough of the other arrive together. What happens? The trough reduces the height of the crest, diminishing the light and leaving the spot dim or even dark. This process is called destructive interference.
Now the fact that light can interfere in such a way as to produce either brightness or darkness provides one of the strongest arguments for the wave nature of light. All types of waves can in fact produce a pattern of constructive and destructive interference when they pass through two small, nearby openings.
A useful application of interference is in non-reflecting coating for glass. The surface is covered with a chemical film of just the right thickness to stop most of the light that would ordinarily be reflected and cause glare. When applied to a camera objective this improves the quality and brightness of the image by cutting out reflections from the various lens surfaces.
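A common design rule for such coatings (standard optics, though not spelled out in the text above) is the quarter-wave film: its thickness is chosen as t = wavelength/(4n), so that the waves reflected from the film's two surfaces arrive half a wavelength apart and cancel destructively. A hedged sketch, assuming a magnesium fluoride coating (n about 1.38) designed for green light at 550 nm:

    wavelength_nm = 550   # assumed design wavelength (green light)
    n_film = 1.38         # assumed refractive index of an MgF2 coating

    t = wavelength_nm / (4 * n_film)   # quarter-wave optical thickness
    print(f"Quarter-wave coating thickness: {t:.0f} nm")   # ~100 nm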
VI) “DIFFRACTION” of LIGHT:
In its simplest terms, the spreading of light coming from a source is called diffraction. Like interference, diffraction also results from the fact that light behaves as a wave. A light wave always tends to spread slightly when it travels through a small opening or around a small object or past an edge. Water waves spread similarly, but the openings and objects that cause them to spread must be much larger than those for light. Because the wavelength of light is so small, diffraction of light is not easy to detect.
Incidentally, the iridescent rainbow play of colors that you see when white light is reflected almost parallel to the surface from a gramophone record is due to the fact that the various wavelengths of light are diffracted by different amounts when reflected by the regularly spaced ridges with which the surface is covered. In fact, a surface covered by fine, evenly-spaced channels or ridges can be used as a substitute for the prism in a spectroscope.
Diffraction of light can be a problem. Suppose you attempt to see a very small object by using a high quality microscope. As you increase the magnifying power to see the object more and more closely, the object edges begin to blur. Each edge blurs because the light passing by the edge on its way to the eye diffracts.
However, diffraction serves a purpose when a device called a diffraction grating is used to study the colours in a light beam. The grating consists of thousands of thin slits that diffract light. Each colour in the light diffracts by a slightly different amount. The spread of colours can be large enough to make each colour visible. A grating used with a telescope can separate the colours in the light from a star, enabling scientists to learn what materials actually make up the stars…
VII) POLARIZATION of LIGHT:
What kind of waves are light waves – longitudinal, like sound or transverse, like waves in a rope, or a mixture of the two, like sea waves? The answer is that light waves are transverse. The proof comes from a study of what is called polarization of light.
Polarization involves the oscillations (regular variations in strength) of the electric fields that make up a light wave. The directions of the oscillations may be represented by arrows. In most of the light we see, the arrows point in many directions perpendicular to the ray's path; such light is unpolarized. If the arrows all point in one direction or its exact opposite, the light is polarized. Suppose that when sunlight reflects from a road to you, its arrows point only to your left or right. You can block it by wearing sunglasses with polarizing filters, which block light oscillating left or right.
A manufactured polarizing sheet material, called Polaroid, has now replaced natural crystals for most of these uses and you must be familiar with the use of a Polaroid camera.
Polarized light can be used to find out just how stresses are distributed in machine parts, and this is a principal application of the polarization property of light. A model of the machine part under study is made out of plastic and is then subjected to the kind of stress that the original part would experience in actual use. When the model is viewed in polarized light, coloured bands appear which reveal the exact stress pattern in that machine part...
The Physics of Sound
What is sound? A sound is a sensation perceived by us through our ears, the organs of hearing. In physics terms, it travels as a wave, essentially a wave of compression. As it passes through a medium, say the air, the molecules of the air crowd together and then draw apart. The sensation of hearing results when such waves strike the ear.
Going by its wave nature, sound very much resembles light, except that sound always requires a medium through which to travel, while light can travel even through a vacuum.
However, like all compressional waves, sound waves can travel through any medium whatsoever except vacuum. The medium of sound travel thus, may be a solid, liquid or a gas such as air.
Sound is produced by vibration or movement. For example, when the thin leather of a drum is beaten, it moves up and down, i.e. it vibrates, producing sound. Similarly, when a guitar string is plucked with the fingers, the string moves rapidly to and fro, producing sound.
Some objects move either so slowly or so rapidly that the ear is not able to hear the sound produced by them. A ruler fixed at one end and plucked at the other will move to and fro, but may not produce an audible sound because the vibration is too slow. In the same vein, a bat squeaks, but some of the sounds it produces are caused by vibrations so rapid that our ears cannot detect them. The human ear can hear vibrations between 20 and 20,000 per second (between 20 Hz and 20,000 Hz). These are the limits of audibility, and the upper limit decreases with age.
Connecting concepts: How does sound travel in waves? Every time a sound is made, there is some vibrating object somewhere; something is moving back and forth rapidly, showing that sound always starts with a vibrating object. But sound also requires a medium to carry it from its source to the hearer. This medium may be anything from air to water to even the earth itself. Legend has it that in ancient times people would put their ears to the ground to hear a distant noise! In short, no medium means no sound, and this is why sound cannot travel in a vacuum. Why does sound require a medium to travel through? The simple answer is that sound travels in waves. As an object vibrates, it causes the molecules or particles of the substance next to it to vibrate. Each particle then passes on the motion to the particle next to it, and the result is sound waves.
Since the media in which sound travels range from wood to water to air, sound waves will obviously travel at a different speed in each medium. So whenever we ask how fast sound travels, we must first ask: in what? The speed of sound in air is about 335 metres per second, approximately 750 miles per hour. But remember, this refers to the speed of sound when the temperature is around zero degrees centigrade; as the temperature rises, the speed of sound also rises. In a given gas, the speed of sound falls as the density rises: the denser the gas, the lower the speed of sound in it. As the temperature rises, the density of the air decreases, and the speed of sound therefore increases. Sound is much faster in water than in air: at a water temperature of about 8 degrees centigrade, sound travels through it at about 1,435 metres per second, roughly 3,210 miles per hour. Similarly, in steel, sound travels at about 5,000 metres per second, or about 11,160 miles per hour.
It must, however, be kept in mind that the speed of sound has nothing to do with whether the sound is loud or weak. Nor does the speed of sound have anything to do with its pitch, high or low. Instead, the speed of sound depends entirely on the medium through which it is travelling. Try a trick to compare how sound carries in air and water. Clap two stones together while you are standing in water. Now go under water and clap the two stones together again. You will be amazed at how much better sound travels through water than through air…
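The temperature dependence described above is usually captured by the standard approximation v = 331 x sqrt(1 + T/273) m/s for air, with T in degrees Celsius. A small sketch (the formula is textbook-standard; the sample temperatures are arbitrary):

    import math

    def speed_of_sound_air(t_celsius):
        """Approximate speed of sound in air, in m/s."""
        return 331.3 * math.sqrt(1 + t_celsius / 273.15)

    for t in (0, 15, 30):
        print(f"{t:>3} deg C: {speed_of_sound_air(t):.0f} m/s")
    # 0 -> ~331, 15 -> ~340, 30 -> ~349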
Infrasonic waves are longitudinal waves of frequency less than 20 hertz; i.e. sound waves having fewer than 20 cycles or vibrations per second are called infrasonic waves. During an earthquake, infrasonic waves pass through the earth's crust. The human heart also vibrates at infrasonic frequencies.
Ultrasonic waves, on the other hand, are longitudinal sound waves with frequencies greater than 20,000 Hz (20 kHz). The vibrations of certain crystals (quartz, zinc oxide, barium titanate, etc.) under the influence of an applied alternating voltage produce ultrasonic frequencies well above this limit, ranging up into the megahertz region. Such energy conversions, mechanical vibrations to electrical or vice versa, are examples of the piezoelectric effect. The human ear cannot hear ultrasonics, but dogs, birds, bats and dolphins can. Dogs respond to dog whistles pitched at 25,000 Hz, inaudible to the human whistle blowers, and bats emit squeaks at 100,000 Hz, guiding themselves in the dark by listening for the echoes from nearby obstructions.
The uses of ultrasonics are many, ranging from cleaning silverware to catching burglars and drawing electronic portraits (sonograms) of unborn babies. Ultrasonic devices work either by delivering focused energy or by detecting and measuring returning vibrations with an ultrasonic receiver.
The piezoelectric effect: In simple terms, it refers to the appearance of an electric potential across certain faces of a crystal when the crystal is subjected to mechanical pressure. Conversely, when an electric field is applied to the same crystal, the crystal undergoes what we call mechanical distortion. It was Pierre Curie and his brother Jacques who first discovered this phenomenon, in quartz and Rochelle salt in the year 1880, and named it piezoelectricity. The piezoelectric effect in fact occurs in many crystals or crystalline substances, such as barium titanate or zinc oxide. The phenomenon can be explained by the displacement of ions within the unit cells of a crystal when it is subjected to pressure or a compressive force. As the ions in every unit cell are displaced by compression, the unit cells become electrically polarized. Since a crystal has a well-ordered, regular arrangement of unit cells in its structure, these effects accumulate, causing an electric potential difference to appear between certain faces of the crystal. Conversely, when an external electric field is applied to such a crystal, the ions in each unit cell are displaced by electrostatic forces, resulting in a mechanical deformation of the whole crystal.
Applications of the piezoelectric effect: Because of the unique capability of piezoelectric crystals to convert mechanical deformation, induced say by the application of pressure, into electric voltages, and alternatively to convert electric voltages into mechanical motion, piezoelectric crystals are used extensively in devices such as transducers, record-player pickup elements, microphones and wrist watches…
Delivering focused energy: Ultrasounds can be aimed, focused and reflected almost like light beams. Specific ultrasonic frequencies can be used for loosening the plaque from teeth as well as for causing kidney stones to be pulverized without affecting the kidney itself.
Detecting and measuring ultrasounds: Ultrasonics can be used for guarding banks, offices and factories. In this application, an ultrasonic beam is aimed and reflected so that it criss-crosses a room several times and then strikes a detector, a kind of ultrasonic microphone. An intruder walking into the path of the sound immediately sets off a remote signal.
Another kind of detecting and measuring is done by using ultrasound as a kind of X-ray, without the risks of X-ray exposure. A beam of ultrasound travels directly through a homogeneous substance, but if it reaches a different substance (at an interface), the beam is reflected, and the reflections can be assembled into a sonogram. In this way, ultrasonic detectors can locate cracks or bubbles in metal castings. Similarly, and much more importantly, interior organs of the body and foetuses can be located and outlined. For example, an echocardiograph machine clearly shows the opening and closing of the valves of the beating heart; thus echocardiography (not to be confused with the ECG, which records the heart's electrical activity) makes use of ultrasonics for heart diagnostics.
The functioning of SONAR (Sound Navigation and Ranging): it is a method of finding the depth of submerged objects by transmitting ultrasonic waves towards them and receiving back the reflected waves. The time taken by the wave to travel to the object and return gives the distance of the object from the ship, provided the speed of the ultrasonic wave in the medium is known. The reflected wave also helps in studying the nature of the reflecting surface.
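The range calculation itself is a one-liner: the echo covers the round trip, so depth = (speed x time)/2. A hedged sketch, assuming a typical speed of sound in seawater of about 1,500 m/s and an illustrative echo delay:

    speed_in_seawater = 1500.0   # m/s, assumed typical value
    echo_delay = 0.4             # s, illustrative round-trip time

    depth = speed_in_seawater * echo_delay / 2   # halved: out and back
    print(f"Depth of object: {depth:.0f} m")     # 300 m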
Modern fishing vessels are equipped with echo sounders (fish finders) which can ultrasonically find the depth of fish schools. A dolphin can detect an individual fish, and its own kind, from a distance of 50 m. Bats can fly in darkness because of a sonar system of their own.
Why do we see a distant event before we hear it? The answer lies in the speed of sound. During thunderstorms we see the lightning long before hearing the thunder, even though both are produced at precisely the same instant. Likewise, when a gun is fired to start a horse race, we see the smoke from far off before we hear the sound of the firing. All this shows that the speed of light is much greater than that of sound. The speed of sound does not depend on the wavelength or amplitude of the wave. Rather, in a given gas, it varies inversely with the square root of the density: the speed of sound in hydrogen is 4 times that in oxygen because oxygen is 16 times denser than hydrogen. At the same time, the speed of sound in a gas is independent of the pressure. With an increase of temperature there is a decrease in the density of the surrounding air, and consequently the velocity of sound increases. The velocity of sound in moist air is greater than in dry air (which is denser than moist air). The velocity of sound decreases if the wind is blowing in the direction opposite to that of the sound wave.
Solids are more elastic than liquids and gases; therefore the speed of sound is greatest in solids, less in liquids and least in gases (even though solids are generally denser than liquids or gases). In air at 0 °C the speed of sound is 330 metres per second; in water it is about 1,450 m/s; in concrete about 5,000 m/s; and in steel about 6,000 m/s.
Just like light or other electromagnetic waves, sound waves exhibit similar properties such as reflection, refraction and diffraction. Each of these properties shows up in many natural phenomena around us; for example:
I) Reflection of Sound waves and “Echo”:
When ripples travelling on a water surface strike a wide obstacle, such as a floating board, a new set of ripples is observed to start back from the obstacle. The waves in this case are said to be reflected from it. In a similar way, sound waves may be reflected from walls, mountains, the ground and so on. Reflection of sound waves obeys the same laws of reflection as light. One of the most pronounced and familiar examples of sound reflection observed in nature is the production of an echo…
Today, when you have a question about anything in nature, you expect to get a true, scientific answer. But in ancient times, people would make up legends to explain things. The legend that the early Greeks had to explain an echo is very charming. Would you like to know it? Here it is.
There was once a lovely nymph called Echo who had one bad fault – she talked too much. To punish her, the goddess Hera forbade her ever to speak without first being spoken to, and then only to repeat what she had heard. One day, Echo saw the handsome youth Narcissus. She fell in love with him at once, but he did not return her love. So Echo grew sadder and sadder and pined away, until nothing was left of her but her voice! And it is her voice which you hear when you speak and your words are repeated.
That sad legend doesn’t really explain an echo, of course. To understand what causes an echo, you have to know something about sound. Sound travels at a speed of about 335 meters per second in the open air. It travels in waves, much like ripples made by a pebble thrown into water. And sound waves go out in all directions from the source, like the light from an electric bulb.
Now, when a sound wave meets an obstacle, it may bounce back, or be reflected, just as light is reflected. When a sound wave is reflected in this way, it is heard as an echo. So an echo is nothing, but a sound repeated by reflection.
Not all obstacles can cause echoes, however. Some objects absorb the sound instead of reflecting it; the sound doesn't bounce back, and there is thus no echo. But usually smooth, regular surfaces, such as a wall, a cliff, the side of a house, or a vaulted roof, will produce an echo.
Did you know that clouds reflect sounds and can cause echoes? In fact, when you hear the rumbling of thunder, it is because the first sharp clap is being reflected again and again by the clouds, which we may also call reverberation.
Thus we see that echoes are produced by the reflection of sound from a distant hard surface such as a wall or cliff. You are able to hear an echo separately from the original sound because there is always a certain time interval (at least 0.1 second) between the sound and its echo. An echo is also produced in a small room, but there the interval between the echo and the original sound is so short that the ear cannot recognize the two as separate sounds. Hence you seem to hear the two sounds at exactly the same time, and the echo simply makes the original sound richer and louder.
It has been found by calculation that the least distance one has to be from an obstacle in order to hear an echo distinctly from the original sound is about 55 ft, or around 17 m. For any distance less than this, the echo cannot be heard separately. Furthermore, if a room is filled with cushioned chairs and people, one will not be able to hear an echo at all, however large the room is, because the sound is partly absorbed by the cushions of the chairs, the wood of the furniture and the clothes and bodies of the people around…
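The 17 m figure follows directly from the ear's 0.1-second resolving interval: the sound must make the round trip in at least that time. A worked check, taking the speed of sound in air as roughly 340 m/s:

    speed_of_sound = 340.0   # m/s, assumed round figure for air
    min_interval = 0.1       # s, the least gap the ear resolves

    min_distance = speed_of_sound * min_interval / 2   # round trip halved
    print(f"Minimum distance for a distinct echo: {min_distance:.0f} m")  # 17 m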
Reverberation: The echo is sometimes heard more than once. This is possible in the still of the night or in a quiet & extremely silent valley where there are no other sounds. This continuous occurrence of echoes is known as reverberation. A clear example is that of the reverberation of thunder, which is due to the continuous reflection of sound between two or more clouds.
Just as a polished surface is the best reflector of light, so smooth, hard surfaces are the best reflectors of sound. Therefore, in order to reduce troublesome echoes in large halls, the walls may be roughened or covered with soft, thick or porous materials such as felt and heavy curtains, which help absorb the sound and stop it from being reflected again and again. This arrangement is also used in broadcasting studios, where echoes must be prevented for clean recording of programmes.
The Acoustics of Buildings: Reverberation is particularly noticeable in cathedrals and other large buildings where multiple sound reflections can occur from walls, roof and floor. Excessive reverberation in a concert hall is undesirable, as it causes music and speech to sound confused and indistinct. On the other hand, it is also undesirable to have no reverberation at all, for in its absence sound seems weak. The characteristics of a building in relation to sound are called its "acoustics", and the pioneering work in this field was carried out in the early twentieth century by Professor W.C. Sabine of Harvard University. The most important property of a concert hall is its reverberation time, defined as the time taken for sound of a specified standard intensity to die away until it just becomes inaudible. Sabine related this reverberation time to the volume of a hall, the surface area of its walls, ceilings and so on, and the sound-reflecting properties of these surfaces; this proved very useful in planning new concert halls.
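Sabine's relation is usually quoted as RT60 = 0.161 V / A, where V is the room volume in cubic metres and A is the total absorption, the sum of each surface area times its absorption coefficient. A hedged sketch with made-up hall dimensions and sample coefficients:

    # Sabine's formula: reverberation time (s) = 0.161 * V / A
    volume = 6000.0   # m^3, illustrative concert-hall volume
    # (area in m^2, absorption coefficient) pairs -- assumed sample values
    surfaces = [(1500, 0.05),   # plastered walls
                (800, 0.30),    # upholstered seating
                (600, 0.10)]    # wooden floor

    absorption = sum(area * coeff for area, coeff in surfaces)
    rt60 = 0.161 * volume / absorption
    print(f"Reverberation time: {rt60:.1f} s")   # ~2.6 s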
Note that for certain special purposes, e.g. investigating the properties of loudspeakers and other sound equipment, it is necessary to have rooms whose walls completely absorb all sound energy falling upon them. This is achieved by lining the walls, ceiling and floor with anechoic wedges composed of glass fibre encased in muslin. In modern building practice, the spaces in the walls between floors and ceilings are usually filled with some inelastic material which absorbs sound instead of transmitting or reflecting it.
II) Refraction of Sound: Why do we hear sounds louder at night than during the day? At night, some distant sounds, such as traffic noise, are often heard louder than during the day. This is attributed to the refraction of sound waves. Why? After sunset, the air near the ground cools more than the air above it, and because sound travels more slowly in cold air than in warm air, sound waves from a source are refracted back towards the ground. During the day, by contrast, the upper air is usually cooler than the air near the ground, and sounds tend to bend upwards.
III) Diffraction and Interference in sound waves: Audible sounds have wavelengths from about 1.5 cm (frequency 20 kHz) up to 15 meters (frequency 20 Hz.) and so suffer diffraction by objects of similar size, such as a doorway 1 m wide. This explains why we can hear sound round corners.
Sound waves of the same frequency from two loudspeakers (supplied by one signal generator) produce a steady interference pattern. As you walk past the speakers you can hear the resulting variations in the loudness of the sound as the waves reinforce and cancel one another. Beats are produced when two notes of almost equal pitch are sounded together, so that the loudness of the resulting sound rises and falls regularly; what is heard is called beats. Note that beats are produced as a result of the interference of sound waves and lend added evidence in favour of the wave nature of sound…
Sounds can be distinguished from one another by three different characteristics: pitch, loudness and quality.
(i) Pitch is that characteristic of a sound by which a high or shrill note can be distinguished from a low or flat one. If the pitch is higher, the sound is said to be shrill; if it is lower, the sound is described as flat. Pitch depends upon frequency: the higher the frequency of a given sound, the higher its pitch. All musical notes have a definite pitch. The voice of a woman is generally of a higher pitch than that of a man.
(ii) Loudness is determined by the amplitude (energy) of vibration of the sound-making object. Thus, greater the energy carried by a sound wave, the greater is the intensity of that very sound.
The intensity of a sound, as received at any given place, is measured in a unit called the decibel. Some common sounds and their noise levels in decibels (dB): ordinary conversation, 60 dB; telephone bell and alarm clock, 70 dB; heavy traffic, 100 dB; rock music and ambulance siren, 120 dB; jet aircraft, 140-150 dB; machine-gun fire, 170 dB. The threshold of pain is about 120 dB.
The decibel is a unit for comparing levels of electric or acoustic power, in other words for measuring the loudness of sounds. It is in fact more convenient to express the loudness of sounds as ratios rather than as absolute magnitudes. Consider, for instance, two sounds as different as the roar of a ramjet engine and a barely audible human whisper. The absolute magnitude of the total power each produces in the surrounding air can be measured in watts: the power of the former may be 100,000 watts, while that of the latter is only 0.000,000,001 watt. The ramjet roar is thus 100 trillion times more powerful than the barely audible human whisper.
When measurements extend over such a wide range, it is more convenient to use a geometric ratio scale rather than the ordinary arithmetic scale provided by the series of whole numbers 0, 1, 2, 3… that we conventionally use for counting. On a geometric ratio scale, numbers increase geometrically: 1, 10, 100, 1,000, 10,000… Writing these as powers of ten, namely 10^0, 10^1, 10^2, 10^3, 10^4…, we can use the power indices 0, 1, 2, 3, 4… to indicate their magnitude in a new unit called the "bel". Since a decibel is one tenth of a bel, the same numbers 1, 10, 100, 1,000, 10,000… may also be written as 0, 10, 20, 30, 40… decibels respectively.
Accordingly, if we reckon the loudness of a human whisper at the threshold of hearing as zero decibels, then the ramjet roar, 10^14 times more powerful, is at 140 decibels. Likewise, the human voice at conversational level, 10^4 times more powerful than the whisper, is at 40 decibels, every increase of ten decibels meaning a ten-fold rise in power.
In brief, decibel unit is to sound what degree of temperature is to heat.
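The whole decibel discussion compresses into one formula: level in dB = 10 x log10(P/P0), where P0 is the reference power at the threshold of hearing. A minimal sketch reproducing the figures above:

    import math

    def decibels(power_ratio):
        """Sound level relative to the threshold of hearing."""
        return 10 * math.log10(power_ratio)

    print(decibels(1))       # 0 dB  : the whisper at threshold
    print(decibels(1e4))     # 40 dB : conversational voice
    print(decibels(1e14))    # 140 dB: the ramjet roar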
(iii) Quality is that characteristic which distinguishes two sounds of the same loudness and frequency coming from different instruments. The quality of a musical sound depends mostly on its wave form. Thus we can easily distinguish between the sounds of a sitar and a violin by their wave forms, even though they may be of exactly the same loudness and frequency.
The above effect is related to another characteristic of sound waves, which we call resonance. Resonance, in a practical sense, is a particular case of forced vibration. When a body A vibrates near another body B which has the same natural frequency as A, then B will also start to vibrate of its own accord. B is then said to vibrate in resonance, or in tune, with A.
In 1940, in the USA, the Tacoma Narrows suspension bridge collapsed because of mechanical resonance. A high-speed wind set the air over the bridge vibrating at a frequency close to the natural frequency of vibration of the bridge. As a consequence, the bridge started oscillating and, after several hours of steady increase in the amplitude of vibration due to resonance, it eventually collapsed.
Soldiers are trained to march in exact coordination with one another so as to maintain a steady rhythm. If soldiers march in step over a suspension bridge, the frequency of their steps may match the natural frequency of the bridge and could set up a dangerous resonance. Soldiers are therefore trained to break the rhythm of their marching while crossing a suspension (or otherwise weak) bridge.
Windows rattle when a low-flying aeroplane passes overhead. This occurs if the natural frequency of vibration of the window happens to be the same as one of the frequencies that make up the noise of the plane's engines.
Similarly, when there is a loud explosion in a room, the window panes (especially if the windows are shut) vibrate strongly. If the explosion is loud enough, the windows may even be shattered to pieces. In the same way, when a bomb is dropped, houses at some distance from the spot may collapse.
Interestingly, our radio also works on the same principle of resonance. Tuning a radio set adjusts the value of the capacitance in a circuit until the circuit has the same natural period of electrical oscillation as the incoming signal. The small alternating e.m.f. set up in the aerial is then able to build up a similar e.m.f. of large amplitude in the tuned circuit…
The Doppler effect is the apparent variation in the frequency of any emitted wave, such as a wave of light or sound, as the source of the wave approaches or moves away relative to an observer. The effect takes its name from the Austrian physicist Christian Johann Doppler, who first stated the principle in 1842. Doppler's principle explains why a source of sound of constant pitch seems higher in pitch when moving towards an observer and lower when moving away. This change in pitch can be heard by an observer listening to the whistle of an express train from a station platform or from another train. The lines in the spectrum of a luminous body such as a star are similarly shifted towards the violet end of the spectrum if the distance between the star and the earth is decreasing, and towards the red end if the distance is increasing. By measuring this shift, the relative motion of the earth and the star can be calculated. In short, when source and observer move towards each other, the apparent frequency of the sound is higher than that actually produced by the source; as they move apart, it is lower.
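For sound, the standard Doppler formula (not written out in the text above) is f' = f (v + v_o)/(v - v_s), with v the speed of sound, v_o the observer's speed towards the source, and v_s the source's speed towards the observer. A hedged sketch for a train whistle, with illustrative numbers:

    def doppler_frequency(f_source, v_sound, v_observer=0.0, v_source=0.0):
        """Apparent frequency; speeds positive when moving towards each other."""
        return f_source * (v_sound + v_observer) / (v_sound - v_source)

    # Illustrative values: 500 Hz whistle, train at 30 m/s, still observer
    print(doppler_frequency(500, 340, v_source=+30))   # ~548 Hz approaching
    print(doppler_frequency(500, 340, v_source=-30))   # ~459 Hz receding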
Red shift is a shift in the wavelength of light emitted by a cosmic object toward the longer (red) wavelengths of the object's spectrum. Light acts like a wave, and its wavelength is the distance between crests of successive waves. The term red shift comes from the first detected shifts, which were in wavelengths of visible light, but such shifts also occur at radio and other electromagnetic wavelengths. When a red shift occurs, all wavelengths are lengthened by the same fraction. A red shift is expressed as a percentage increase over the normal wavelength. An example can be seen in the spectra of quasars, extremely powerful sources of radio and light waves. A series of bright spectral lines caused by hydrogen appears in the spectrum of quasar 3C 273 (object 273 in the 3rd Cambridge catalogue of radio sources). The wavelength of each line of 3C 273 is 15.8 per cent longer than normal; thus the red shift of the quasar is 15.8 per cent. In essence, if the star and the earth are moving closer to one another, more light pulses are received in a given time interval, and the light from the star appears shifted towards the violet end of the spectrum; when the distance between the earth and the star is increasing, the observed light is shifted towards the red end.
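The quasar figure can be restated as a formula: z = (change in wavelength)/(normal wavelength), and for modest speeds the recession velocity is roughly v = cz. A sketch using the 15.8 per cent shift of 3C 273 quoted above (the small-z approximation is an assumption; the exact relativistic formula differs at such speeds):

    c = 3.0e5                # km/s, speed of light (rounded)
    z = 0.158                # red shift of quasar 3C 273 from the text

    v_approx = c * z         # small-z approximation only
    print(f"Recession speed: ~{v_approx:.0f} km/s")   # ~47,400 km/s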
A plane flying below the speed of sound creates pressure disturbances that travel faster than the plane itself; consequently, the sound of the plane runs ahead of it and can be heard by people on the ground as the plane approaches. Note that the sound of a plane flying faster than the speed of sound, i.e. at supersonic speed, cannot be heard on the ground until the aircraft has passed by. After a supersonic aircraft has flown overhead, people on the ground may hear a sharp "boom" or "bang", which we call a "sonic boom."
Mach numbers are used to describe the speed of an aircraft relative to the local speed of sound, precisely because the speed of sound in air is not always the same: the atmosphere consists of layers of air at different temperatures and densities at successive heights above the ground. The speed of sound thus depends on the altitude and the temperature of the air. At sea level and at 15 °C, sound travels at about 1,190 km/h. At higher altitudes the air is cooler, and the speed of sound therefore decreases: at 12,000 metres it is about 1,060 km/h.
A Mach number is thus found by dividing the speed of an aeroplane by the speed of sound at the altitude at which the plane is flying.
Flight faster than Mach 1, the speed of sound, is called supersonic flight; flight slower than Mach 1 is accordingly called subsonic flight.
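The definition above is simple enough to compute directly; a hedged sketch using the altitude speeds quoted earlier in the text:

    def mach_number(plane_kmph, sound_kmph):
        """Mach number = plane speed / local speed of sound."""
        return plane_kmph / sound_kmph

    print(mach_number(1190, 1190))   # 1.00 at sea level (figures from the text)
    print(mach_number(1190, 1060))   # ~1.12 for the same plane at 12,000 m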
A sonic boom is caused by the shock waves from a plane: the plane moves as fast as the pressure disturbances it creates, and these disturbances 'pile up' in front and form a shock wave. At least two shock waves are created, one from the front and one from the rear of the plane, but since the two waves arrive close together, only a single boom may be heard.
The name "sound barrier" is actually a misleading way to describe a condition that exists when planes travel at certain speeds. Scientists once expected a kind of "barrier" when planes reached the speed of sound, about 335 metres per second in open air. In actual fact, no such barrier ever developed!
In order to understand this, let's start with a plane in ordinary low-speed flight, i.e. flying below the speed of sound in air. As the plane moves forward, its front parts send out a pressure wave, caused by the piling up of particles of air as the plane moves forward.
This pressure wave travels out ahead of the plane at the speed of sound, whereas the plane is flying more slowly than that. The pressure wave, therefore, moves faster than the plane itself. As it rushes ahead of the plane, it causes the air to move smoothly over the wing surfaces of the low-speed plane.
Now suppose the plane is travelling at the speed of sound. The air ahead receives no pressure wave in advance of the plane, because the plane and the pressure wave are moving forward at the same speed. The pressure wave therefore builds up at the front of the wing rather than ahead of it, as before.
The result is a shock wave, which creates great stresses in the wing of the plane. Before planes actually flew at the speed of sound and faster, it was expected that these shock waves and stresses would create a kind of "barrier" for the plane: a "sound barrier." But no such barrier materialized, and aeronautical engineers have been able to design planes and aircraft that pass through this regime safely.
Incidentally, the loud "boom", also referred to as a "sonic boom", that is heard when a plane passes through the "sound barrier" is caused by the shock wave described above, formed when the speed of the pressure wave and the speed of the plane are the same. A sonic boom can easily shatter window panes. The faster or lower a plane flies, the stronger the shock wave and the louder the sonic boom…
MAGNETISM AND ELECTRICITY
Human beings have known about magnets since ancient times, but they did not learn much about how magnets work, or how to use them, until a few hundred years ago. The first known magnets were hard black stones called magnetite (an oxide of iron). No one knows when or by whom these stones were discovered, but the ancient Greeks knew of the lodestone's power to attract iron. The Chinese discovered that a splinter of this rock, if hung by a thread, would always set itself in the north-south direction; they therefore used such an arrangement as a compass for guiding ships. It is only in recent times, however, that magnetism has become so widely applied and so pervasive in electrical devices that it is hard to imagine any electrical or electronic device without a magnet in its construction…
1. Magnets strongly attract certain materials, such as iron, steel, nickel and cobalt, which are called ferromagnetic materials.
2. Magnetic poles are the places in a magnet to which magnetic materials, such as iron filings, are attracted. They occur in pairs of equal strength; in a bar magnet they are to be found near the ends.
3. If a magnet is supported so that it can swing in a horizontal plane, it always comes to rest with one pole, the north-seeking or N pole, pointing roughly towards the earth's North Pole. A magnet can therefore be used as a compass.
4. If the N pole of a magnet is brought near the N pole of a suspended magnet, the two poles can be seen to repel each other. Two S poles also repel each other. By contrast, N and S poles always attract each other. The law of magnetic poles thus states: like poles repel, unlike poles attract.
Repulsion is the only sure test for polarity. Attraction shows either that the two poles are unlike or that one piece is unmagnetised.
The first permanent magnets were made of steel (an alloy of iron). Modern magnets are much stronger and are principally of two types: alloy magnets and ceramic magnets. Alloy magnets contain metals such as iron, nickel, copper, cobalt and aluminium. Ceramic magnets are made from powders called ferrites, which are compounds of iron oxide with other metal oxides; they are essentially brittle.
The phenomenon of magnetism is applied in our daily lives in a variety of ways. For example, the electromagnet forms the basis of the electric motor and the transformer. Magnetic materials have made advances possible in computer technology: parallel and antiparallel regions of magnetization in computer memories serve as the units of the binary number system used in computers, and magnetic materials are also used in the tapes and disks on which data are stored. Besides these tiny magnetic units, large, powerful magnets are employed in many other technologies. Strong magnets make it possible for magnetic levitation (maglev) trains to float above the track, so that there is no friction between the vehicle and the track to slow the train down. Maglev trains promise to overcome the principal limitation of conventional wheeled trains: the high cost of maintaining precise track alignment to avoid excessive vibration and rail deterioration at high speeds. They can thus sustain speeds greater than 500 km/h, limited only by the cost of the power required to overcome wind resistance. The world's first maglev train in commercial service ran at Birmingham, England; it started in 1984 but was shut down in 1995 after operating for about 11 years. A Sino-German maglev is still operating over a stretch of around 30 km in Shanghai, China.
Doctors use powerful magnetic fields in magnetic resonance imaging (MRI) for effective diagnosis of the body's anatomy, and this has given birth to a field of medical diagnosis called biomagnetism. Magnetism also finds a principal application in the SQUID (Superconducting Quantum Interference Device), which is among the most sensitive magnetic-field detectors ever invented. Essentially an ultra-sensitive detector of magnetic flux (field strengths being measured in tesla), a SQUID uses a superconducting ring interrupted by one or two Josephson junctions. SQUIDs have found an important application in magnetoencephalography (MEG), by virtue of which the body can be probed without the strong magnetic fields associated with a technique like MRI. MEG thus serves as a non-invasive method for recording the minute magnetic fields that emanate from the brain, using a device called a neuromagnetometer, a helmet-like apparatus placed around the head of the patient during diagnosis. The one essential condition for working with a SQUID is an extremely low temperature, as low as about 4.2 kelvin. Of late, SQUIDs have also been used to measure the minute magnetic fields generated by a baby's heart, by placing the device around the mother's abdomen, thus allowing foetal heart conditions to be diagnosed. Particle accelerators use superconducting magnets to keep the accelerated particles focused and moving in a curved path.
The curious thing about electricity is that it has been studied for thousands of years – and we still don’t know exactly what it is! Today, all matter is thought to consist of tiny charged particles. Electricity, according to this theory, is simply a moving stream of electrons or other charged particles.
The word “electricity” comes from the Greek word electron. And do you know what this word meant? It was interestingly the Greek word for “amber”! You see, as far back as 600 BC the Greeks knew that when amber was rubbed, it became capable of attracting towards it some light bits of cork or paper.
Not much progress was made in the study of electricity until 1672. In that year, a man called Otto von Guericke produced a more powerful charge of electricity by holding his hand against a ball of spinning sulphur. In 1729, Stephen Gray found that some substances, such as metals, carried electricity from one location to another. These came to be called “conductors.” He found that others, such as glass, sulphur, amber and wax, did not carry electricity. These were called “insulators.”
The next important step took place in 1733 when a Frenchman called du Fay discovered positive and negative charges of electricity, although he thought these were two different kinds of electricity.
But it was Benjamin Franklin who tried to give an explanation of what electricity was. His idea was that all substances in nature contain "electrical fluid." Friction between certain substances removed some of this "fluid" from one substance and placed an extra amount in the other. Today we can say that this "fluid" is composed of nothing but electrons, the negatively charged particles of an atom that revolve around the atomic nucleus in discrete orbits.
Probably the most important developments in the science of electricity began with the invention of the first battery in 1800 by Alessandro Volta. This battery gave the world its first continuous, reliable source of electric current and led to all the important later discoveries in the use of electricity…
Today it has been conclusively shown that all matter is made up of tiny charged particles, among them the electrons, and it is the charge of these electrons that makes matter charged and allows bodies to become charged. The charging of bodies is now understood in terms of the structure of the atoms composing the body. In its normal condition an atom is electrically neutral, because the charge is balanced by the presence of an equal number of electrons and protons. It is usually the electrons that move from one place to another, while the nuclei of the atoms remain fixed in place. How, then, does a body become charged? When a glass rod is rubbed with silk, some electrons from the rod attach themselves to the silk, making the glass positively charged (it has lost electrons) and the silk negatively charged (it has gained them).
Lightning and thunder must have been among the first things in nature that mystified and frightened primitive man. When he saw the jagged streaks of lightning in the sky and heard the claps and rumbles of thunder, he believed the gods were angry and that the lightning and thunder were a way of punishing man.
To understand what lightning and thunder actually are, we must recall a fact we know about electricity: things become electrically charged, either positively or negatively. A positive charge has a great attraction for a negative one.
As the charges become greater, this attraction becomes stronger. A point is finally reached where the strain of being kept apart becomes too great for the charges. Whatever resistance holds them apart, such as air, glass or other insulating substance, is overcome or “broken down”. A discharge takes place to relieve the strain and make the two bodies electrically equal.
This is just what happens in the case of lightning. A cloud containing countless drops of moisture may become oppositely charged with respect to another cloud or to the earth. When the electrical pressure between the two becomes great enough to break down the insulation of the air between them, a lightning flash occurs. The discharge follows the path offering the least resistance; that is why lightning often zigzags.
The ability of air to conduct electricity varies with its temperature, density and moisture. Dry air is a pretty good insulator, but very moist air is a fair conductor of electricity. That’s why lightning often stops when the rain begins falling. The moist air forms a conductor along which a charge of electricity may travel quietly and unseen.
What about thunder? When there is a discharge of electricity, it causes the air around it to expand rapidly and then to contract. Currents of air rush about as this expansion and contraction take place, and the violent collision of these air currents is what we hear as thunder. The reason thunder rolls and rumbles when it is far away is that the sound waves are reflected back and forth from cloud to cloud.
Since light travels at about 186,282 miles (299,792 kilometres) per second and sound at only about 335 metres per second through air, we always see the flash first and hear the thunder later.
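This speed gap gives the familiar storm-distance trick: count the seconds between flash and thunder and multiply by the speed of sound, since the light's travel time is negligible. A minimal sketch, using the 335 m/s figure the text itself quotes and an illustrative delay:

    speed_of_sound = 335.0    # m/s, figure used in the text
    delay_seconds = 6.0       # illustrative flash-to-thunder delay

    distance_km = speed_of_sound * delay_seconds / 1000
    print(f"Storm is about {distance_km:.1f} km away")   # ~2 km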
Each and every electron in any substance carries the same amount of charge; therefore, the more electrons an object loses or gains (by rubbing, for example), the greater its positive or negative charge. Charge is measured in coulombs (C), a coulomb being the charge on about 6 million million million (6 x 10^18) electrons.
What is Current? The amount of charge passing per unit time through a conductor is called current; in other words, electric current is electric charge in motion. In a solid conductor, such as a wire, the current consists of a swarm of moving electrons, while in certain liquids and in gases the carriers may include positively and negatively charged atoms as well. In addition, a beam of electrons or charged atoms may be made to travel in a vacuum, where no conductor is involved at all; such a beam amounts to a current just as much as one in a wire. An electromotive force (e.m.f.), provided by a cell or a generator, is essential to maintain a flow of current.
AC and DC currents: If the current in a circuit flows always in the same direction, it is called a direct current. If the electron flow is alternately backward and forward, it is referred to as an alternating current.
The practical unit of current is the ampere; one ampere is a flow of one coulomb of charge per second, which means a flow of about 6 million million million (6 x 10^18) electrons per second. Conventionally, the direction of electric current is taken as opposite to the direction of flow of the electrons.
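The electron count follows from the elementary charge e, about 1.6 x 10^-19 C: one ampere means one coulomb per second, hence roughly 6 x 10^18 electrons per second. A quick check:

    e = 1.602e-19                  # C, charge of one electron
    current = 1.0                  # A, i.e. 1 coulomb per second

    electrons_per_second = current / e
    print(f"{electrons_per_second:.2e} electrons/s")   # ~6.24e18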
Charge has a tendency to move from one place to another if an electric potential difference (P.D.) exists between the two places. Electric charge always flows from a higher potential to a lower potential. The potential difference between two points is equal to the work done in moving a unit positive charge from one point to the other; its unit is the volt.
What is Electric Power? It is the rate of doing work and is measured in units called watts. Every piece of electrical equipment carries a label stating its working voltage as well as its power consumption in watts. A 100-watt bulb, for example, gives more light than a 40-watt bulb, but it also uses up more electrical power. A 100-watt bulb consumes approximately one unit of electricity in 10 hours (1 kilowatt-hour = 1,000 watts x 1 hour). Typical power ratings of common appliances are shown below:
Appliance            Typical power
Lamps                60, 100 W
Fire                 1, 2, 3 kW
Fridge               150 W
Kettle               2-3 kW
TV set               200 W
Immersion heater     3 kW
Iron                 750 W
Cooker               8 kW
The flow of electric current through a conductor produces several useful effects. They include (1) heat, (2) light, (3) magnetism, and (4) chemical effects.
Heat – When electricity flows through a conductor, the resistance of the conductor converts some of the electric energy into heat energy. Certain electric devices, such as cookers, heaters and toasters, generate heat by passing current through special heating units. These units are made of materials that have a fairly high resistance to current. In an electric cooker, for example, electricity travels through coils of special wire in the heating unit, and the resistance of the coils causes them to become red hot. The coil of wire is wound on a mica or fireclay frame, which serves as an insulator and is able to withstand high temperatures.
Light – The atoms of all substances contain energy. Ordinarily, an atom has a certain energy level. If an atom absorbs additional energy, it moves to a higher energy level. Such an atom is called an excited atom. After absorbing the additional energy, the atom soon drops back to a lower energy level. When the atom drops back, it gives off its excess energy in the form of light.
As current flows through a conductor, it always generates some heat, and it is this heating that can make a conductor give off light. This is how an incandescent bulb lights our homes: the current heats the filament until some of its atoms are excited to higher energy levels, and when these excited atoms drop back to their lower levels, the filament glows and gives off light.
The governing principle is that the higher the temperature to which the filament is heated, the greater the proportion of electrical energy converted to light. Since lamps must produce a good amount of light, their filaments have to be heated to a considerable temperature, which in turn demands a filament material with a very high melting point. This requirement is met by tungsten, the common filament metal, which melts at about 3400 °C. Why are filament lamps filled with nitrogen or argon instead of air? Because air, being a mixture of gases, inevitably contains oxygen, which would cause combustion of the filament, tungsten or not. At the same time, nitrogen or argon reduces evaporation of the tungsten, which would otherwise condense onto the glass and blacken the bulb. Likewise, the compactness of the filament coil reduces cooling by convection currents set up in the gas. Unfortunately, a conventional filament lamp converts only about 10% of the electrical energy supplied to it into light; the remaining 90% is wasted as heat. This is why filament lamps are energy-inefficient and are increasingly being replaced by energy-efficient fluorescent lamps.
Fluorescent lamps are about three times as efficient as filament lamps and may last over 3000 hours, compared with the roughly 1000-hour life of a conventional filament lamp. Although fluorescent lamps cost more to install, their lower running costs compensate for the difference. Moreover, being extended sources of light, they also cause fewer problems with shadows.
Fluorescent lamps have no filament, unlike conventional bulbs; instead, electricity is passed through metal vapours at very low pressure. The colour of the light produced depends on the particular vapour used: mercury vapour gives a greenish light and sodium vapour a yellow light. Such lamps are used for street lighting and are named mercury or sodium lamps accordingly.
Similarly, neon lamps, which give a bright orange-red light, are used in advertising signs.
Note that besides giving out coloured light, a mercury-vapour discharge tube also emits ultraviolet rays. When ultraviolet radiation, which is invisible, falls on certain minerals, they glow brilliantly in various colours; this phenomenon is called fluorescence. Accordingly, the inside of a mercury discharge tube is coated with a mixture of metallic powders that give out white or tinted light. Some of these powders may contain beryllium compounds, which are highly poisonous if they enter a cut in the skin.
What is an electric arc lamp? Electric arc lamps are used in searchlights and as projection lamps, where an intensely concentrated source of light is required; one common form has metal electrodes surrounded by xenon gas. The temperature reached in an electric arc is in the region of 3700 °C, well above the melting point of most metals, which is why electric arcs also find major applications in electric furnaces and welding equipment.
A conductor carrying electricity is always surrounded by a magnetic field: current flowing through a wire sets up a magnetic field around the wire. If the wire is wound into a coil, the magnetic field is further strengthened; such a coil is called a solenoid. If a soft-iron rod is placed inside the solenoid, the current magnetizes the iron too, and an even stronger magnetic field results. These are the steps followed in constructing electromagnets, which find application in almost every piece of electrical equipment.
Most electromagnets consist of a solenoid wound around an iron core. The core, being magnetically soft, loses its magnetism when the current is switched off; an electromagnet therefore functions as a temporary magnet. Its strength increases if the current in the coil increases, if the number of turns on the coil increases, or if the poles are brought closer together.
A small piece of soft iron, called the armature, is often used along with an electromagnet; it is attracted by the electromagnet when current flows through the coil.
Applications of electromagnets: Electromagnets are used in applications as diverse as lifting and transporting heavy steel items and separating iron objects from scrap.
They are used in telephone receivers, electric motors, cyclotrons, etc.
The magnetic effect of electricity also finds application in measuring instruments such as the moving-coil galvanometer, which is so sensitive that it can detect currents as small as a hundred-millionth of an ampere.
One of the most pronounced uses of the magnetic effect of current, and hence of electromagnets, is in electric motors. An electric motor converts electrical energy into mechanical energy. It works on the principle that when an electric current is passed through a conductor placed in a magnetic field, a force acts on the conductor and sets it in motion. Commercial motors convert about three-quarters of the electrical energy supplied to them into mechanical work, and they are basic to the working of many gadgets.
The same magnetic effect of electricity, and hence of electromagnets, is at work in a loudspeaker, which converts electrical energy into sound energy. Electrical vibrations obtained from a microphone are first amplified and then sent through the terminals to a voice coil sitting in a magnetic field; in the loudspeaker, this electrical energy is transformed into mechanical vibrations of a cone and ultimately into sound.
Electromagnetism in artificial heart pacemakers: Electromagnetic induction is the principle behind the working of artificial heart pacemakers, which have enabled many people to lead active and useful lives. By this effect, a tired heart can be made to beat regularly at a controlled rate suited to the user's requirements; a "tick-or-silent" switch gives assurance of correct functioning, and if the supply plug becomes accidentally disconnected, the generator automatically gives a warning buzz to alert the patient.
Electromagnetic induction in an induction coil also finds important application in some wireless transmitters as well as in the internal-combustion engines of motor cars, where an induction coil produces the high-tension spark that initiates combustion of the fuel.
The transformer, an appliance by which the voltage of an alternating current (but not of a direct current) can be increased or decreased, also works on the principle of electromagnetic induction.
"Step-up transformers" are used in power transmission at the generating station, and in television and wireless sets for supplying the required voltages.
"Step-down transformers" are used in electric bells, in radio sets for valve heaters, and in power sub-stations to step down the voltage before distribution to consumers.
Last but not least, generators or dynamos, which convert mechanical energy into electrical energy, also work on the principle of electromagnetic induction.
As we know, a telephone is a device by which speech (sound) can be conveyed from one place to another. Structurally, a telephone set consists mainly of a receiver and a transmitter (microphone), connected by live wires. (Interestingly, a modern cell-phone handset combines receiver and transmitter in a single device called a transceiver.) A steady current is passed through the microphone by connecting it in series with a battery and the primary of a step-up transformer.
During a conversation, sound waves entering the microphone change its resistance and so cause the current in the primary circuit to vary; in consequence, a high-voltage AC is set up in the secondary of the transformer with the same frequency as the original sound waves. This AC is then transmitted along the lines to the receiver at the other end, where the electrical energy is converted back into sound.
It is possible to transmit messages without a transformer, but only over short distances. For long-distance transmission the line resistance is so great that the small changes of resistance in the microphone would have an almost negligible effect on the current, and the sound would be too weak to hear. Note that in all electrical sound-reproducing and sound-recording devices, as in the telephone, the first step is always the conversion of sound vibrations into an electric current.
In a tape recorder, a plastic tape coated with magnetic oxide passes beneath the core of a coil carrying the varying "voice currents" and so becomes permanently magnetized in the pattern of the original sound waves.
To reproduce the sound, the magnetized tape is run past another coil, and the magnetic pattern impressed on it is converted by electromagnetic induction back into a varying current. This current is amplified and fed to loudspeakers, which convert it back into sound. The magnetic pattern on a tape can be "erased" by passing the tape between the poles of a magnet, after which the tape is ready for reuse…
What kind of power is supplied to homes? One of the main advantages of alternating current (AC) is that it can be easily and cheaply changed from one voltage to another by a transformer, with very little loss of energy. For this reason, electric power is generally conveyed as AC from generating stations over high-voltage overhead power lines, forming what we call the grid. From the grid, power is transmitted as AC to homes for domestic use, precisely because it can be transformed to very high voltage and sent over long distances with minimum power loss.
Connecting concepts: The generation and transmission of electricity, from source to destination:
Electricity is generated in a typical power station at 11,000 V and stepped up to 132,000 V (132 kV) by transformers located at the power station itself. It is fed into the grid at this voltage and subsequently stepped down in successive stages, to about 33,000 V and then to 6,600 V, at sub-stations in the neighbourhood of towns and other areas where the energy is to be consumed.
What is a grid? A grid is a system of overhead wires connecting the large power stations in a country and feeding their power to any part of the country through power sub-stations. The main sub-stations are interconnected to several generators, forming a complex system of interconnections called the grid system. Its great advantage is that if power fails at one station, power can be tapped from another station in the grid. Big consumers such as factories receive their power at the high voltage of 6,600 V. For domestic consumers, on the other hand, the voltage is stepped down to 220 V, which is its effective value; the peak value is about 311 V.
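The reason for transmitting at such high voltages can be made concrete with a little arithmetic: for a fixed power P carried by a line of resistance R, the current is I = P/V and the line loss is I²R, so raising the voltage sharply cuts the loss. A rough sketch, with an assumed line resistance and load:

```python
# A minimal sketch of why power is transmitted at high voltage. The power
# to deliver and the line resistance below are assumed for illustration.

P = 1_000_000.0      # power to deliver, in watts (1 MW, assumed)
R = 10.0             # resistance of the transmission line, in ohms (assumed)

for volts in (6_600, 33_000, 132_000):
    current = P / volts              # line current in amperes (I = P / V)
    loss = current ** 2 * R          # power dissipated in the line (I^2 * R)
    print(f"{volts:>7} V -> line loss {loss / 1000:.1f} kW "
          f"({100 * loss / P:.2f}% of transmitted power)")
```

Running this shows the loss falling from roughly 23% of the load at 6,600 V to well under 0.1% at 132,000 V, which is exactly why the grid steps the voltage up before long-distance transmission.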
In our homes, we receive electric power through mains carried either overhead on poles or by underground cables. One of the wires, usually with red or brown insulation, is called the live wire (or positive). The other, with black or blue insulation, is called the neutral wire (or negative). The potential difference between them is 220 volts (in India).
The neutral wire is earthed at the local sub-station, and although current does pass through it, a person touching it accidentally would not get a shock, because the potential difference between it and the earth is zero.
At the meter board in a house, these wires pass into the watt-hour meter through a main fuse, a double fuse with one element in each wire. Then, through a main switch, they are connected to the live wiring of the house.
Connecting concepts: What is a fuse? A fuse is a short length of wire made of a material with a low melting point (often tinned copper) which melts and breaks the circuit the moment the current through it exceeds a certain value. There are two common causes of excessive current in a circuit: a short circuit, due to worn-out insulation on connecting wires, and an overloaded circuit. A short circuit occurs when two wires, one live and one neutral, come into direct contact with each other. The resistance of the circuit then drops sharply and the current rises enormously, heating the wires and possibly producing an electric arc. Without a fuse, this would make the wiring dangerously hot, with the consequent risk of a serious fire.
The main switch is also a double switch, one in each wire; when it is in the "off" position both wires are disconnected from the mains and repairs can be carried out safely. A second fuse, of lower capacity than the main fuse, is also connected in the live wire, so that if there is a short circuit in that line, this fuse melts rather than the main fuse. The various appliances in the house are then connected to these live wires, each with its own independent switch. If the switches were connected in the neutral wire, the sockets would remain live even with the switches "off". So that each appliance receives the full supply voltage, appliances are connected in parallel with one another; this also ensures that switching one on or off does not affect the others. The main line also carries a third wire, the earth wire, connected to an earth terminal for safety. In a big house, several pairs of lines start from the main switch and serve different portions of the house; each pair carries its own fuse, so that a short circuit cuts off the current only in that portion of the house and the other lines are unaffected.
A three-pin plug is usually used in the sockets placed on the ring circuit of the house. The earth pin goes to the top connection on all power sockets and is longer, so that the appliance is earthed before it is connected to the live wire. The earth pin is connected to the metal case of the appliance, which is thus joined to earth by a path of almost zero resistance. If, for example, the element of an electric fire breaks or sags and touches the case, a large current flows to earth and "blows" the fuse. Otherwise, the metal case would become live, and anyone touching it would receive a shock, possibly a fatal one, especially if they were themselves earthed, say by standing on a concrete floor or holding a water tap.
Electrical energy consumed is reckoned in kilowatt-hours (kWh) as the units of consumption. The "watt" rating written on electric lamps and on almost every other electrical appliance indicates its power, i.e. the rate at which it consumes electrical energy.
If we multiply the number of watts by the number of hours of use and divide by 1000 to convert watts to kilowatts, we obtain the electrical energy used over that period. For instance, a 60-watt lamp used for 100 hours consumes 60 × 100 / 1000 = 6 kilowatt-hours, or 6 units (1 kWh = 1 unit of electricity consumption). An electricity bill is thus made up of two parts: (a) a fixed amount payable monthly whether or not any electricity was used that month (say, because you were away on vacation), and (b) a charge for the actual energy consumed, reckoned per unit.
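The same arithmetic, together with the two-part bill just described, can be put into a few lines of code. The fixed charge and per-unit rate below are assumed figures, not actual tariff values:

```python
# A hedged sketch of the two-part electricity bill described above.
# The fixed charge and per-unit rate are assumed, not real tariffs.

def units_consumed(watts, hours):
    """Energy in kilowatt-hours (units): watts x hours / 1000."""
    return watts * hours / 1000

def monthly_bill(units, fixed_charge=50.0, rate_per_unit=6.0):
    """Fixed monthly amount plus a per-unit charge on actual consumption."""
    return fixed_charge + units * rate_per_unit

# The worked example from the text: a 60 W lamp used for 100 hours.
units = units_consumed(60, 100)
print(units)                 # 6.0 units
print(monthly_bill(units))   # 50 + 6 x 6 = 86.0
```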
The Science of ELECTRONICS
Interestingly, the sciences of electronics and electricity both deal with electric current, but with a thin line of distinction between them. The science of electricity treats current mainly as a form of energy, energy that can operate lights, motors and other equipment. The science of electronics, by contrast, treats current as a means of carrying or transmitting information. Hence the branch of physics that studies electric current as a carrier of information from one place to another is called electronics. Currents that carry information are called signals; they may represent sounds, pictures, numbers or even letters. A steadily flowing, unchanging current carries energy but no information; converting that flow into a varying form that can serve as a signal requires a device, and an electronic device does exactly that, shaping the current into either an analog or a digital signal. Broadly, then, the signals handled by electronic devices fall into two categories: digital and analog.
A digital signal is like an ordinary electric switch: it is either on or off. An analog signal, on the other hand, can have any value within a certain range.
Analog signals are widely used to represent sounds and pictures because light levels and the frequencies of sound waves can take any value within a given range. Analog signals can be converted into digital signals and vice versa; for example, compact-disc (CD) players convert the digital sound signals on discs into analog signals for playback through loudspeakers.
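The conversion in the other direction, from analog to digital, amounts to restricting a continuously variable value to one of a fixed set of levels. Below is a minimal sketch of such quantization (not any particular codec or converter); the sample values are arbitrary:

```python
# A minimal sketch of the analog/digital distinction: an analog signal may
# take any value in a range, while a digital representation restricts it
# to a fixed number of discrete levels.

def quantize(sample, levels=256, lo=-1.0, hi=1.0):
    """Map an analog value in [lo, hi] to the nearest of `levels` steps."""
    step = (hi - lo) / (levels - 1)
    index = round((sample - lo) / step)
    return lo + index * step

analog = [0.3141, -0.2718, 0.5772]        # arbitrary analog samples
digital = [quantize(s) for s in analog]   # their nearest discrete levels
print(digital)
```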
It is interesting to note that this ability of an electronic device to shape an electric current into signal form is conferred on it by a specialized class of materials called semiconductors. Semiconductors make up what can be described as the basic structural and functional unit of any electronic device, the transistor. This is analogous to the living cell, the structural and functional unit of the human body: just as billions of individual cells compose a complete human body, a complete electronic device is made up of anywhere from a hundred thousand to millions of transistors.
Transistors still operate millions of stereos, radios and television sets, but engineers can now put more than a hundred thousand of them on a single chip of silicon smaller than a fingernail. Such a chip forms an integrated circuit (IC). Chips of this type can be wired together on circuit boards to produce electronic equipment that is not only much smaller and less expensive, but also far more powerful, than ever before.
Today, electronic devices are commonly used in a large number of applications that formerly relied on mechanical or electrical systems. Examples range from electronic controls in automatic cameras and electronic ignition systems in cars to electronic controls in domestic equipment such as washing machines.
A transistor makes use of a specialized class of materials without which the science of electronics could not have come into being: semiconductors, made of either silicon or germanium. Because semiconductors made the IT revolution possible, any epicentre of the IT industry anywhere in the world today tends to be known by the epithet "Silicon Valley". Whether we speak of the IT industry or the electronics industry, it is semiconductors that lie at the base of it all.

Practical semiconductor diodes were developed in the early 1940s, and in 1947 the American physicists John Bardeen, Walter H. Brattain and William Shockley achieved a breakthrough by inventing the first transistor. By the early 1950s, manufacturers had begun using transistors as amplifiers in devices such as hearing aids and pocket-sized radios, and by the 1960s semiconductor diodes and transistors had replaced the vacuum tubes hitherto used in most electronic gadgets. What followed was the birth of microelectronics, which saw a marked reduction in the size of gadgets through the evolution of integrated circuits (ICs) and their incorporation into the electronic devices of the day. The period that saw the emergence of semiconductor materials in the construction of diodes and the like is described as the solid-state era of electronics.

Before it came the vacuum-tube era, which began in 1904 when the British scientist J. A. Fleming built the first commercially used vacuum tube, a two-electrode "diode" tube that could detect radio signals. In 1907 the US inventor Lee De Forest invented and patented the three-electrode "triode" tube, which became the first electronic amplifier; its first principal application was in long-distance telephone lines. The triode as an oscillator was the joint work of De Forest and the American radio pioneer Edwin Armstrong in 1912-13. With the vacuum tube in place as amplifier and oscillator, radio broadcasting got off to a scintillating start in the US in 1920, a date that also marks the beginning of the electronics industry. The development of vacuum tubes and their use as amplifiers and oscillators between the 1920s and the 1950s made possible inventions such as television, electronic computers, radar and films with sound. The culmination of the vacuum-tube era came in 1946 with the first general-purpose electronic computer, ENIAC (Electronic Numerical Integrator and Computer). ENIAC contained over 18,000 vacuum tubes, which made it a huge and awkward-looking machine, yet it was over 1000 times faster than the fastest non-electronic computer then in use…
Connecting concepts: What are semiconductors? Any material, in terms of its conductivity, is either a conductor or an insulator. But some substances are neither good conductors nor complete insulators; their conductivity lies somewhere in between. Such materials are called semiconductors. Interestingly, semiconductor materials are insulators when very pure, especially at low temperatures, but their conductivity can be greatly increased by adding tiny, controlled amounts of certain impurities, a process known as doping. They can then be used to make devices such as diodes, transistors and integrated circuits. Common examples of semiconductors are the elements silicon, germanium, selenium and carbon; of these, the first two have to date remained the bulwark of the entire electronics industry. Doping yields two types of semiconductor material, n-type and p-type. In n-type silicon, the silicon (Si) is doped with phosphorus (P) atoms, which increases the number of negative electrons free to move through the material. In p-type silicon, boron (B) atoms are used for doping; they create gaps, called positive "holes", in the material, and conduction occurs by electrons jumping from one hole to another. The effect is just as if positive holes were moving in the opposite direction, and for that reason conduction in a p-type semiconductor is usually ascribed to positive holes.
A diode can be made by doping a crystal of pure silicon (or germanium) so as to form a region of p-type material in contact with a region of n-type material; the boundary between them is called the junction. The connection to the p-side is the anode and that to the n-side the cathode. If a p.d. is applied so that the p-type region is positive (repelling its positive holes towards the junction) and the n-type region negative (repelling its electrons likewise), holes and electrons drift across the junction: the diode conducts, has a low resistance, and is said to be forward biased. If the p.d. is applied the other way round, with the p-type region negative and the n-type positive, the electrons and holes are attracted to opposite ends of the diode, away from the junction, so that no charge flows across it: the diode hardly conducts, exhibits a high resistance, and is said to be reverse biased.
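The asymmetry between forward and reverse bias can be illustrated with the ideal-diode (Shockley) equation, I = Is(e^(V/Vt) − 1). The saturation current and thermal voltage below are typical textbook assumptions, not measured values:

```python
# A hedged sketch of forward vs reverse bias using the ideal-diode
# (Shockley) equation. Parameter values are assumed textbook figures.

import math

I_S = 1e-12     # reverse saturation current, in amperes (assumed)
V_T = 0.025     # thermal voltage at room temperature, ~25 mV

def diode_current(v):
    """Current through an ideal p-n junction at applied voltage v."""
    return I_S * (math.exp(v / V_T) - 1)

print(diode_current(0.6))    # forward biased: tens of milliamperes flow
print(diode_current(-0.6))   # reverse biased: only about -Is, effectively nothing
```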
In chemistry, silicon and germanium (together with carbon) are members of the carbon family, also referred to as the group-14 elements; besides these three, group 14 includes tin and lead. Silicon is the second most abundant element in the Earth's crust (27.7% by mass) and occurs in nature in the form of silica and silicates. Economically, silicon has long been the backbone of the electronics industry, though it also finds use as a component of ceramics, glass and cement. Germanium, in contrast, occurs in nature only in traces, which has made silicon the ubiquitous constituent of all IC chips today. Note that only the ultra-pure forms of germanium and silicon are used to make transistors and other semiconductor devices. All group-14 members are solids; silicon is a non-metal and germanium a typical metalloid. Silicon predominantly occurs in nature as silica and silicate minerals, which together make up the bulk (roughly 95%) of the Earth's crust. Silica, chemically silicon dioxide, occurs in several crystallographic forms, of which quartz, cristobalite and tridymite are the most prominent; they are interconvertible at suitable temperatures. Silicon dioxide is a three-dimensional crystalline network in which each silicon atom is bonded tetrahedrally to four oxygen atoms, each oxygen atom in turn being bonded to another silicon atom, so that every corner of a tetrahedron is shared with another. Quartz is noted for its extensive use as a piezoelectric material, which has made possible extremely accurate clocks, modern radio and TV broadcasting, and mobile radio communications. Similarly, silica gel is a well-known drying agent, while the amorphous form called kieselguhr is used in filtration plants. Among the naturally occurring silicates, the best-known examples are the zeolites, chemically aluminosilicates. They have great commercial significance as catalysts in the petrochemical industry for the cracking of hydrocarbons; ZSM-5 is a typical zeolite used to convert alcohols directly into gasoline. Moreover, hydrated zeolites are used as ion exchangers in the softening of hard water.
A "pollution" that has proved a blessing for mankind: THE SOLID-STATE POLLUTION:
Isn't it a paradox that a "pollution" of sorts has not only brought creature comforts to man but also revolutionized the way he used to live?
So it seems, as far as the prima facie meaning of the word "pollution" is concerned. But in a practical sense it has happened exactly this way: the science of electronics, better called solid-state physics, was born as an offshoot of physics during the first half of the 20th century. In the true sense of the term, solid-state physics is largely the study of crystalline solids, substances whose individual atoms or molecules are arranged regularly in three-dimensional space, forming a periodic array of points repeating endlessly in a regular rhythm. Joined together, these points form a geometrical figure, typically a cube, which constitutes the structural unit of the crystalline solid and is called a "cubic unit cell". The corners of a unit cell are occupied by atoms or molecules, and a crystal, in strict solid-state terms, is nothing but a lattice of such identical cells endlessly repeated. Consider a crystal of germanium, a brittle, greyish-white element: in its pure and perfect state it acts as a near-perfect insulator, and the same is true of its relative, silicon. How, then, have they come to be the materials on which virtually the entire world of electronics is built? The answer lies in "polluting" them with another material so as to make them conduct. In the parlance of solid-state physics, this polluting is called doping, and the material added is called an impurity. This is how electronics converts a substance that is an insulator in its pure state into a semiconductor when doped with a foreign material such as arsenic, phosphorus or boron, the stuff of which the universal transistor is made. This is what one gleeful researcher in the field described as "solid-state pollution, the backbone of modern electronics."
Electronics has an all-important role in a country’s development process today. Electronics plays a catalytic role in enhancing production and productivity in key sectors of the economy, whether it relates to infrastructure, process industries, communication, or even manpower training. High-tech areas today depend heavily on electronics.
Electronics is conventionally classified into consumer, industrial, defence, communications and information processing sectors. In recent times, medical electronics and systems for transportation and power utilities have become important segments on their own.
Consumer electronics is the oldest sector of the field; it began with the development of radio receivers after the invention of the triode. International competitiveness in this field requires constant innovation. The field has expanded remarkably in the last few years with items such as compact-disc (CD) players, digital audiotape, microwave ovens, washing machines and satellite-television reception systems. All these items make use of advanced technologies and manufacturing techniques, such as semiconductor lasers and microwave devices.
Industrial electronics is oriented towards manufacturing products required by modern industry: process-control equipment, numerically controlled machinery, robots, and equipment for testing and measurement. Laboratories too require instruments of precision. This field has great potential for growth and development.
Advanced infrastructure in materials science and sophisticated electronics are both relevant to the defence field, where equipment must withstand environmental stresses besides being precise and sensitive. Defence electronics is strategic, of course, but it also offers valuable spin-offs to industry. Bharat Electronics Ltd. (BEL), a defence-funded organization, has contributed much to the development of the transistor and television in India.
Communications electronics is a rapidly growing field with much scope for innovation and industrial application. Communications equipment has benefited immensely from the development of efficient semiconductor lasers, optical-fibre technology, digital techniques, and powerful microprocessors.
Information technology, again, is clearly dependent on electronics. The integrated circuit is the basis of computers, which are in turn used for designing better very-large-scale integrated (VLSI) circuits, particularly for communication systems, while fast and efficient communications lead to distributed computer networks giving one access to specialized data on a distant computer from one's own workplace.
In the medical field, electronics has made possible the ECG (Electrocardiogram) recorder as well as the NMR (Nuclear Magnetic Resonance) scanner besides other measurement equipment.
Although the transistor remains the structural and functional unit of every electronic device, the most discernible development as the technology advanced has been the ever-increasing number of transistors in a device and the ever-shrinking area into which they are packed. The remarkable result is that engineers can today put over a hundred thousand transistors on a single small silicon chip, probably far smaller than a fingernail. Such a chip, with its hundreds of thousands of embedded transistors, forms an integrated circuit (IC). Any modern electronic device is essentially an assemblage of such ICs wired together on flat plates called circuit boards. In a common PC, the motherboard (MB) is just such a circuit board, and it effectively constitutes the mother of the PC, since it integrates all the other functional parts of the computer system…
In essence, except for the primitive era of electronics marked by vacuum tubes, every subsequent generation of electronic devices has had the transistor at the heart of its construction. With miniaturization, transistors came to be packed onto single small silicon chips, defining the current fourth generation of electronic devices, described as the VLSI (very-large-scale integration) era; the consequence has been not only extraordinary speed but also a considerable reduction in the size of electronic gadgets. This is what the famous Moore's law once prophesied!
One can reasonably say that the course of human development rests on the evolution of our capacity to manipulate objects in ever finer detail. This capability has given birth to the knowledge era, in which the world's information is available on hand-held computers. Rapidly growing demands on computing capability necessitated smaller and smaller transistors so that devices could shrink in size; this is popularly captured by Moore's law, which may be stated as: "the number of transistors in an integrated circuit doubles every 18 months." This so-called law has been roughly followed throughout the history of integrated circuits. To double every 18 months, more transistors have to be packed into the same area, that is, devices have to shrink. To what size can they further shrink? Ultimately, to objects as small as a few atoms or molecules. The technology of tomorrow therefore needs objects manipulated at the atomic level, and the next-generation technology that will allow the manipulation of materials at the atomic scale and even below is very much in the offing: it is called nanotechnology…
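Stated as a doubling every 18 months, the law is easy to project forward. A minimal sketch, with an assumed starting count of 100,000 transistors:

```python
# A minimal sketch of Moore's law as stated above: transistor count
# doubling every 18 months. The starting count is assumed for illustration.

def transistors_after(years, start=100_000, doubling_months=18):
    """Projected transistor count after `years` of 18-month doublings."""
    doublings = years * 12 / doubling_months
    return start * 2 ** doublings

for years in (3, 6, 9):
    print(f"after {years} years: ~{transistors_after(years):,.0f} transistors")
    # 3 years = 2 doublings (x4), 6 years x16, 9 years x64
```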
The birth of the quantum computer: As the science of electronics passed through successive stages of evolution since its inception, the electronic gadgets that form its life-blood evolved alongside it. The result was miniaturization: the principal functioning and constitution of a gadget remained the same, but it kept getting smaller. This was made possible by the engineers' ability to pack over a hundred thousand transistors onto a single silicon chip, using a technique that came to be known as photolithography, and hence to construct ICs. The computer industry illustrates this miniaturization most magnificently. The first general-purpose electronic computer, ENIAC (Electronic Numerical Integrator And Computer), built by the US engineers J. P. Eckert and J. W. Mauchly in 1946, was a huge machine consisting of over 18,000 on-and-off switches called vacuum tubes; it weighed around 30 tons and gobbled up 150 kilowatts of power, yet could hardly store 700 bits of information in its memory. Since ENIAC, computer technology has advanced greatly through increasing miniaturization of its basic switching unit: first the bulky vacuum tube was replaced by the ubiquitous transistor, and then came the silicon chip. Interestingly, even the silicon chip is not the last word in miniaturization, so the question naturally arises: what next?
A large part of the answer lies in the phenomenon of superconductivity, discovered in certain materials that exhibit practically zero electrical resistance at temperatures approaching absolute zero, far below the freezing point of water. It has made possible an electronic device about a thousand times more compact, called the cryotron. A fair idea of the advance in compactness, from the first computers to the present, can be had from the fact that a cubic foot of space could barely house a few hundred vacuum tubes, whereas the same space can accommodate a few thousand transistors, several hundred thousand silicon chips, and several million cryotrons. Computer engineers, incidentally, are not content even with the cryotron limit of miniaturization; they are already thinking beyond it, and the birth of quantum computers is very much next on the agenda, probably the last and ultimate stage of miniaturization achievable in the world of electronics. Quantum computing will owe its origin to the emerging, high-end science of nanotechnology, which promises to manipulate and engineer matter, both living and non-living, at the nanometre scale.
It may be noted that nanotechnology and nanoscience came into prominence in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunnelling microscope (STM). These led to the discovery of fullerenes in 1985 and of carbon nanotubes a few years later. In another development, the synthesis and properties of semiconductor nanocrystals were studied, leading to a rapidly growing number of metal-oxide nanoparticles and "quantum dots", which made the birth of so-called quantum computers an imminent possibility. Later, the invention of the atomic force microscope (AFM) in the 1990s, coupled with the launch of the United States National Nanotechnology Initiative in 2000, gave the nanotechnology mission a real shot in the arm for further development and expansion…
Connecting concepts: What is ‘superconductivity’?
Certain conductors offer no resistance at all to the flow of electric current through them; they exhibit a phenomenon called superconductivity. Such conductors are called superconductors, and they are repelled by magnetic fields, exhibiting what we call diamagnetism. Note, however, that superconductivity is manifested only below a certain critical temperature and critical magnetic field, which vary from material to material. Before 1986, the highest known critical temperature was 23.2 K (−249.8 °C/−417.6 °F), in a niobium-germanium compound; an expensive and somewhat inefficient coolant, liquid helium, was needed to reach so low a temperature. Later it was found that certain ceramic metal-oxide compounds containing rare-earth elements were superconducting at higher temperatures, allowing the less expensive and more efficient coolant liquid nitrogen, boiling at 77 K (−196 °C/−321 °F), to be used. In 1987, the composition of one of these superconducting compounds, with a critical temperature of 94 K (−179 °C/−290 °F), was revealed to be YBa2Cu3O7 (yttrium-barium-copper oxide). It has since been shown that rare-earth elements such as yttrium are not essential, for in 1988 a thallium-barium-calcium-copper oxide was discovered with a critical temperature of 125 K (−148 °C/−234 °F).
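The temperature conversions quoted above follow from C = K − 273.15 and F = C × 9/5 + 32 (the text rounds the offset to 273). A quick check:

```python
# A small sketch verifying the temperature conversions quoted above.

def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

for k in (23.2, 77, 94, 125):
    c = kelvin_to_celsius(k)
    print(f"{k} K = {c:.1f} °C = {celsius_to_fahrenheit(c):.1f} °F")
```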
The term "nano" comes from the Greek word for "dwarf" and in scientific terminology refers to a nanometer (nm). One nanometer is a millionth of a millimeter, i.e. one-billionth of a meter (10^-9 m). A single human hair is around 80,000 nanometers in width.
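A one-line check of that scale, using the quoted hair width:

```python
# One nanometer is 1e-9 m, so an ~80,000 nm hair is about 0.08 mm wide.
NANOMETER = 1e-9                      # meters
hair_width_m = 80_000 * NANOMETER     # ~8e-5 m
print(hair_width_m * 1000)            # ~0.08 mm
```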
The concept behind nanotechnology was floated as early as 1959 by the Nobel laureate physicist Richard Feynman, though the term itself was coined later by Norio Taniguchi, and it was Eric Drexler who actually popularized work with the minutest of particles, 1–100 nanometers in size. Nanoscience involves the manipulation of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at larger scales. According to Professor Norio Taniguchi of Tokyo Science University, "nanotechnology" mainly consists of "the processing, separation, consolidation, and deformation of materials by one atom or by one molecule." In the 1980s the basic idea of this definition was explored in much more depth by Dr. K. Eric Drexler, who promoted the technological significance of nanoscale phenomena and devices through speeches and books.
According to a study published on November 1, 2007, nanoscale computing is set to transform personal and industrial data storage. Professor Albert Fert, who won the Nobel Prize for Physics in October 2007 jointly with the German Peter Gruenberg, has showcased the potential of a new generation of disk drives promising to boost data storage by a factor of a hundred. Magnetic Random Access Memory (MRAM) could essentially collapse the disk drive and the computer chip into one, vastly expanding both processing power and storage capacity. MRAM potentially combines key advantages (non-volatility, effectively infinite endurance, and fast random access, down to five-nanosecond read/write times) that make it a likely candidate for becoming the "universal memory" of nanoelectronics.
Researchers at Hewlett-Packard have shown that nanoscale circuit elements called memristors, previously built into memory devices, can perform full Boolean logic, as in computer processors. Memristor logic devices are considerably smaller than devices made from transistors, enabling more computing power to be packed into a given space. Memristor arrays performing both logic and memory functions would eliminate the need to transfer data between a processor and a hard drive in the future.
Given in the box below is a quick glance at the evolutionary sequence of "computeronics":
First Generation (mechanical or electromechanical):
Calculators: Difference Engine
Programmable devices: Analytical Engine
Second Generation (Vacuum tube):
Calculators: IBM 604, UNIVAC 60
Programmable devices: Colossus, ENIAC, EDSAC, EDVAC, UNIVAC I, IBM 701/702/650, Zuse Z22.
Third Generation (discrete transistors and SSI, MSI, LSI Integrated circuits):
Mainframes: IBM 7090, System/360
Minicomputers: PDP-8, System/36
Fourth Generation (VLSI Integrated circuits):
Microcomputer: VAX, IBM System-1
4-bit: Intel 4004/4040
8-bit: Intel 8008/8080, Motorola 6800, Zilog Z80
16-bit: Intel 8088/Zilog Z8000
32-bit: Intel 80386/Pentium/Motorola 68000
64-bit: x86-64/Power PC/MIPS/SPARC
Embedded: Intel 8048, 8051
Personal Computer: Desktop, Laptop, SOHO, UMPC, Tablet PC, PDA.
Fifth Generation: Presently, experimental and theoretical computing and artificial intelligence are gaining ground; examples include quantum computers, DNA computing and optical computers…
Universal Computers Characteristics:
Calculation, Speed, Storage, Retrieval, Accuracy, Versatility, Automation, Diligence (no fatigue), File and Data Exchange, Networking, etc.
An algorithm is a repetitive routine which, if followed, is guaranteed to solve a problem, assuming, of course, that the problem has a solution. An example is the simple routine for finding a word in a dictionary: one scans the list of words in alphabetical order until one finds it. The dictionary provides a set of possible solutions in some order, and we test each word in succession until the required one is found.
This algorithm is typical of any problem-solving process a computer can be set to: a generator produces possible solutions (the word list, in our case) and a test procedure recognizes a solution. Here the search is guaranteed to succeed in a reasonable time. The process is therefore amenable to computer operation, even though searching a word list involves handling purely non-numerical symbols, for the computer can manipulate these as readily as numbers. All that need be done is to program the computer to follow the prescribed algorithm or recursive routine: scan the possible solutions produced by the generator, test them in succession, and terminate the operation when the solution is in hand.
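The generator-plus-test structure described above can be written down directly. A minimal sketch, with an assumed miniature word list standing in for the dictionary:

```python
# A minimal sketch of the generate-and-test routine described above:
# a generator produces candidates in order, a test recognizes the answer.
# The word list below is assumed for illustration.

def generate_and_test(candidates, is_solution):
    """Scan candidates in order; stop at the first that passes the test."""
    for candidate in candidates:
        if is_solution(candidate):
            return candidate
    return None   # the problem may have no solution in the list

dictionary = ["ampere", "coulomb", "ohm", "volt", "watt"]
print(generate_and_test(dictionary, lambda w: w == "volt"))   # 'volt'
```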
The computer owes its success at such computational or algorithmic tasks to its high speed of operation: so long as some way of performing a task can be specified, it does not really matter how laborious the ensemble of operations would be if carried out by us as human beings.
No doubt, then, that it is the lightning rapidity of computer operation that makes even its most devious computational roundabouts quicker than many an apparent crow-flight…
Artificial Intelligence (AI) is an attempt to make machines that amplify our cerebral power exactly as a diesel engine amplifies our muscle power. They are designed to emulate human mental faculties such as language, memory, reasoning, common sense, speech, vision and perception. We are still far from making them, despite the recent spectacular advances in computer technology. The reason is that human intelligence has not yet become sufficiently introspective to know itself. When at last we have discovered the physical structures and neurophysiological principles underlying the intelligence shown by a living brain, we shall also have acquired the means of simulating it synthetically.
What we have done so far is to make a very crude simulation of the living brain by assembling simple neural substitutes of a digital kind: on-and-off mini-switches called transistors, packed on silicon chips. As a result, the comparison between the networks of electrical hardware we call computers and the networks of biological neurons we call animal brains is a most puzzling paradox: as closely allied in some respects as next of kin, in others they are as far apart as stars and sputniks.
Consider, to start with, the resemblances. In performing a mental task, both resort to three principles: choice, to decide the future course of action; feedback, to self-rectify errors; and redundancy, in coding as well as in components, to secure reliability. These resemblances at first seemed so striking as to earn computers the nickname of "electronic brains". But we now know better. Alas! The divergences diverge much more radically than the coincidences coincide. The most conspicuous departure is in the methods of storage, recall and processing of information. Because the methods of data storage, access and processing practised in computer engineering today are very primitive beside their infinitely more complex but unknown analogues in the living brain, the computer has not yet come of age or weaned itself from the tutelage of algorithms: it can handle only such tasks as can be performed by a more or less slavish follow-up of prescribed routines. The living brain, on the contrary, operates essentially by non-algorithmic methods bearing little resemblance to the familiar rules of logic and mathematics built into a computer.
It seems that the language of the brain is logically much "simpler" than any we have been able to devise so far. As von Neumann once remarked, we are hardly in a position even to talk about it, much less to theorize about it. We have not yet mapped the activities of the human brain in sufficient detail to serve as a foothold for such an exercise. For too long we have been content with the superficial analogy between the living brain and digital computers merely because both are hard-wired networks in which information is transmitted as digitally coded electrical signals. But in the living brain, the further transmission of information across the synapses (the junctions of neurons) is no longer digital. It is not the all-or-nothing response it is in computers, but a finely graded response secured by the subtly controlled release of neurotransmitters, neurochemicals which have fast but brief effects on very limited targets. In the past ten years it has been realized that an enormous number of substances are involved in synaptic communication, deftly modulating the transmission process. As far as we know today, these substances are of two kinds, called neurotransmitters and neuromodulators.
Unlike the fast transmitters, most neuromodulators are slow-acting, long-lasting, and often work at long range in very complex ways. Their discovery has given rise to a new concept of the nervous system: an array of hard-wired circuits constantly tuned by chemical signals, totally unlike our telephone exchanges or computers. It is this chemical modulation that gives the living brain its flexibility and its adaptability to long-term changes, both internal and environmental. Recent research has revealed two main points of departure between the computer and the brain. First, the basis of the brain's long-term memory is very unlike the computer's: it seems to result from changes in protein synthesis brought about by neuromodulators, which produce permanent changes at synapses and regulate receptor density, thereby altering a neuron's sensitivity to transmitters. Second, we have to study the living brain at three different levels, the molecular, the cellular and the organismic, and unfortunately it is very difficult to switch from one level to another. It is very hard, for example, to link studies of the molecular constituents of membranes to whole-animal behaviour; the more we see of one level, the more we block the view of the others.
As a result, AI research seems to have reached a blind alley. It cannot make a breakthrough by doing more of the same, and it will take long to leap out of it, for we are still very far from completing the ever-growing catalogue of neuroactive substances that modulate the signals at synapses, let alone unravelling the complexity of their actions. That knowledge alone is likely to provide the unifying factor so far lacking in neurobiology, and its provision is the open sesame to AI…
There are a number of non-computational tasks, such as writing poetry, composing music, recognizing patterns and proving theorems, which the computer is as yet unable to handle. In these cases, too, problem-solving processes or algorithmic routines can be devised, but they offer no guarantee of success. Consider, for example, the routine of the academy of Lagado for writing "books on philosophy, poetry, politics and so on", described at length in Swift's Gulliver's Travels. No computer, however rapid, could follow it through to success, simply because the number of possible solutions the routine generates rises exponentially. It is the same with other complex problems, such as playing games, proving theorems and recognizing patterns, where we may be able to devise a recursive routine to generate possible solutions and a procedure to test them: the search fails because of the overwhelming bulk of eligible solutions that have to be scanned for testing.
The only way to solve such non-algorithmic problems by mechanical means is to find some way of ruthlessly reducing the bulk of possibilities under whose debris the solution lies buried. Any device, stratagem, trick, simplification or rule of thumb that does so is called a heuristic. For example, printing at the top of every page of a dictionary the first and last words appearing on it is a simple heuristic device that greatly lightens the labour of looking for the word we need. "Assume the answer to be X and then proceed backwards to obtain an algebraic equation" is another instance of a heuristic, used for divining arithmetical riddles like those listed in the Palatine Anthology or Bhaskara's Lilavati. Drawing a diagram in geometry, or playing trumps in bridge "when in doubt", are other heuristic devices that often succeed. In general, by drastically limiting the area of search, heuristics ensure that the search terminates in a solution most of the time, even though there is no guarantee that the solution will be optimal.
Indeed, a heuristic search may fail altogether, for it casts aside a wide range of possibilities as useless. Consequently, a certain risk of partial failure, such as sometimes missing the optimal solution, or even of total failure, has to be faced. Nevertheless, resort to heuristics for solving problems more complex than computation, and not amenable to algorithmic routines, is inescapable.
It may soon be possible to wear your computer or mobile phone under your sleeve, with the invention of an ultra-thin and flexible electronic circuit that can be stuck to the skin like a temporary tattoo. The devices, which are almost invisible, can perform just as well as more conventional electronic machines but without the need for wires or bulky power supplies, scientists said.
The development could mark a new era in consumer electronics. The technology could be used for applications ranging from medical diagnosis to covert military operations.
The “epidermal electronic system” relies on a highly flexible electrical circuit composed of snake-like conducting channels that can bend and stretch without affecting performance. The circuit is about the size of a postage stamp, is thinner than a human hair, and sticks to the skin by natural electrostatic forces rather than glue or any other conventional adhesive.
“We think this could be an important conceptual advance in wearable electronics, to achieve something that is almost unnoticeable to the wearer. The technology can connect you to the physical world and the cyber world in a very natural way that feels comfortable,” said Professor Todd Coleman of the University of Illinois at Urbana-Champaign, who led the research team.
A simple stick-on circuit can monitor a person’s heart rate and muscle movements as well as conventional medical monitors, but with the benefit of being weightless and almost completely undetectable. Scientists said, it may also be possible to build a circuit for detecting throat movements around the larynx in order to transmit the information wirelessly as a way of recording a person’s speech, even if they are not making any discernible sounds.
Tests have already shown that such a system can be used to control a voice-activated computer game and one suggestion is that a stick-on voice box circuit could be used in covert police operations where it might be too dangerous to speak into a radio transmitter.
“The blurring of electronics and biology is really the key point here,” said Yonggang Huang, Professor of Engineering at Northwestern University in Evanston, Illinois. “All established forms of electronics are hard and rigid. Biology is soft and elastic. It’s two different worlds. This is a way to truly integrate them.”
Engineers have built test circuits mounted on a thin, rubbery substrate that adheres to the skin. The circuits have included sensors, light-emitting diodes, transistors, radio frequency capacitors, wireless antennas, conductive coils and solar cells.
“We threw everything in our bag of tricks on to that platform, and then added a few other new ideas on top of those, to show that we could make it work,” said John Rogers, Professor of Engineering at the University of Illinois at Urbana-Champaign, a lead author of the study published in the journal Science…
Television, as you know, is a rather complicated process. Whenever such a process is developed, you can be sure a great many people had a hand in it, and its beginnings go far back. So television was not “invented” by one man alone.
The chain of events leading to television began in 1817, when a Swedish chemist named Jons Berzelius discovered the chemical element “selenium.” Later it was found that the amount of electrical current selenium would carry depended on the amount of light which struck it. This property is called “photoelectricity.”
In 1875, this discovery led a United States inventor, G.R. Carey, to make the first crude television system, using photoelectric cells. As a scene or object was focused through a lens onto a bank of photoelectric cells, each cell would control the amount of electricity it would pass on to a light bulb. Crude outlines of the object that was projected on the photoelectric cells would then show in the lights on the bank of bulbs.
The next step was the invention of “the scanning disk” in 1884 by Paul Nipkow. It was a disk with holes in it which revolved in front of the photoelectric cells and another disk which revolved in front of the person watching. But the principle was the same as Carey’s.
In 1923 came the first practical transmission of pictures over wires, accomplished by Baird in England and Jenkins in the United States. Then came great improvements in the development of television cameras. Vladimir Zworykin and Philo Farnsworth each developed a type of camera, one known as “the iconoscope” and the other as “the image dissector.”
By 1945, both of these camera pickup tubes had been replaced by “the image orthicon.” And today, modern television sets use a picture tube known as “a kinescope.” In this tube, an electron gun scans the screen in exactly the same way as the beam does in the camera tube, enabling us to see the picture.
Of course, this doesn’t explain in any detail exactly how television works, but it gives you an idea of how many different developments and ideas had to be perfected by different people to make modern television possible.
Colour television pictures are created by additive mixture of the three primary colours in light. Light from the scene is focused on the colour TV camera by a lens. When the light reaches the camera, it is split into three beams by a set of special mirrors. Each beam of light is then passed through a filter to produce three separate beams of red, green and blue light. These beams are directed to three separate camera tubes which produce signals for each of these colours. These are further processed to produce signals defining the brightness, hue and saturation of the scene, and these three signals are eventually broadcast on one carrier wave. A colour TV screen has thousands of tiny areas that glow when struck by a beam of electrons: some areas produce red light, others green, and still others blue. When we watch a colour programme, we do not actually see each red, green or blue area; instead, we see a range of many colours produced when the red, green and blue lights blend in our vision. We see white light when certain amounts of red, green and blue light are combined. This combining of the primary colours to produce white light also makes it possible for a colour TV to show black-and-white pictures.
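A toy sketch of the additive mixing described above, in Python; the helper and the colour values are purely illustrative, treating each screen point as a red, green and blue intensity between 0 and 255:

```python
# Each colour is an (R, G, B) triple of intensities from 0 to 255.
# Additive mixing: the intensities of the two light beams simply add,
# clamped at the maximum the screen can display.
def add_light(c1, c2):
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))                   # (255, 255, 0): yellow
print(add_light(add_light(RED, GREEN), BLUE))  # (255, 255, 255): white light
```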
It has been found that an image of any object generally lasts on the retina of the eye for about one-tenth of a second after the object has disappeared. This lingering of the image on the retina is what has made possible the production of so-called “motion pictures.” Twenty-four separate pictures, each slightly different from the previous one, are projected on to the screen per second, giving the impression of continuous motion. The effect that rapidly changing pictures on a screen create on our retina is referred to as “persistence of vision.” In a television receiver, twenty-five complete pictures are produced every second.
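As a quick check of the arithmetic, each film frame lasts 1/24 s and each TV picture 1/25 s, both comfortably shorter than the roughly 0.1 s retention quoted above; a small sketch:

```python
# Frame-to-frame interval for film (24 pictures/s) and TV (25 pictures/s),
# compared with the ~0.1 s that an image persists on the retina.
RETINA_PERSISTENCE_S = 0.1  # approximate figure from the text

for name, fps in (("film", 24), ("TV", 25)):
    interval_ms = 1000.0 / fps
    print(f"{name}: {interval_ms:.1f} ms per frame "
          f"(< {RETINA_PERSISTENCE_S * 1000:.0f} ms, so motion looks continuous)")
```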
The traditional curved television screen, the cathode ray tube (CRT), was until recently the familiar face of our household television sets and may still be the most ubiquitous electronic product in our homes. But owing to sweeping technological advancements in the electronics industry, it has slowly given way to a new breed of flat-screen TV sets that can be hung on the wall like a painting, without the back bulge that was a common morphological feature of traditional sets. The birth of these flat-screen, wall-painting-like TVs has been made possible by fresh and rapid developments in an otherwise quite old technology, the liquid crystal display (LCD). Before the advent of the flat-screen TV era, LCD technology was commonly used in calculators and electronic watches; its use in this new breed of TV sets has given them the market name of LCD TVs.
Liquid crystals have been known for long. In 1888, Reinitzer, an Austrian botanist, found that the organic chemical cholesteryl benzoate melted at 145°C into a cloudy liquid, which became clear only on further heating to about 173°C. The phase between these two temperatures was given the name ‘liquid crystal’.
In fact, it has been noted that the liquid crystalline phase occurs in solids that have two or more melting points. Liquid crystals flow and take the shape of their container; however, the molecules of such a liquid tend to maintain positions approximately parallel to one another in arrays of one or two dimensions, instead of remaining packed in a haphazard manner. The ‘nematic’ liquid crystal is the simplest type. The most important technological application of liquid crystals is in digital displays. In a watch or calculator, two glass sheets are coated on their inner surfaces with a transparent film of electrically conductive material. In 1963 it was discovered that an electric current passed through a liquid crystal causes the molecules to realign and re-channel the light waves falling upon them.
Characters or images are rendered visible by modifying ambient light, and very little electricity is used by the LCD. However, it was found that as the display became bigger, the contrast between light and dark diminished. In the first generation of LCD screens, a picture is created by applying voltages to rows and columns of liquid crystals. The use of thin-film transistor technology led to what are known as active matrix LCDs. In this system, a tiny transistor is added to each cell to augment its charging power; thus each picture element, or pixel, is switched on and off by its own transistor. Several thousand electronic elements are printed on a sheet of chemically treated glass, producing an electronic mesh, or ‘active matrix’. The Japanese have made advanced LCDs basic to their electronics strategy. Today, LCDs are key components in portable computers, laptops, hand-held organizers, video games and virtual reality devices, etc…
Light has long been used as a medium for the transfer of information, in the form of fires, lamps and light signals. But for long distance communication, light as one perceives it poses problems: it always travels in a straight line and does not go round corners, yet corners need to be negotiated if long distances are to be spanned. Furthermore, the atmosphere is unsuitable as a transmission medium, since dust, fog, rain, etc., cause heavy attenuation owing to absorption, dispersion and diffraction, limiting the range of even line-of-sight transmission to just a few kilometres. This serious limitation called for a device which could entrap light within itself and which could be flexed to negotiate physical obstacles. The other requirement was a source of light of high intensity. In 1970, a breakthrough was achieved when optical fibres with losses of the order of 20 decibels per kilometre were developed. The invention of lasers and light emitting diodes (LEDs) solved the problem of the light source. Continuing research finally gave birth to optical fibre based communication systems.
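The 20 dB/km figure translates directly into surviving power: attenuation in decibels relates output power to input power by P_out = P_in · 10^(−dB/10). A small sketch (the 0.2 dB/km modern-fibre figure is an assumption for comparison, not from the text):

```python
# Fraction of optical power remaining after a fibre run, given its loss in dB/km.
def power_fraction(loss_db_per_km, length_km):
    return 10 ** (-loss_db_per_km * length_km / 10)

# The 1970-era fibre quoted above: 20 dB/km leaves only 1% of the power after 1 km.
print(power_fraction(20, 1))                 # 0.01
# An assumed modern fibre of ~0.2 dB/km over an 80 km repeater span
# (cf. the repeater distances quoted below) still delivers about 2.5%.
print(round(power_fraction(0.2, 80), 4))     # ~0.0251
```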
The velocity of light varies in different transparent media. A medium in which light travels more slowly is termed denser than one in which it travels faster, which may be called the rarer medium. A light ray crossing from a dense medium into a less dense one bends away from the normal of incidence. If one keeps increasing the angle of incidence (the angle between the ray of light and the normal), a stage is reached when the light, instead of being refracted, is reflected back into the denser medium. This phenomenon is known as ‘total internal reflection’. If the reflected ray again meets a boundary with an identical rarer medium, it will again undergo total internal reflection. Optical fibres are manufactured to satisfy this condition, enabling light to remain trapped inside the fibre as it travels along the fibre’s length.
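Quantitatively, total internal reflection sets in beyond the critical angle given by Snell’s law, sin θc = n₂/n₁. A minimal sketch with assumed, purely illustrative refractive indices for a glass core and its cladding:

```python
import math

# Critical angle for total internal reflection at a dense-to-rare boundary:
# sin(theta_c) = n2 / n1, from Snell's law with the refracted ray at 90 degrees.
def critical_angle_deg(n_core, n_cladding):
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative (assumed) indices for a glass core and slightly rarer cladding.
print(round(critical_angle_deg(1.48, 1.46), 1))  # ~80.6
# Rays striking the core wall at more than ~80.6 degrees from the normal
# are totally internally reflected and stay trapped inside the fibre.
```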
The basic building blocks of a fibre optic communication system are: (a) a transmitter consisting of a light source and associated drive circuitry; (b) fibre encased in a cable to provide a practical method of spanning long distances with ease and safety; and (c) a receiver made up of a detector, amplifier and other signal-restoring circuitry.
Injection Laser Diodes (ILDs) and LEDs are the light sources used in optical fibre communication. These devices can be directly modulated by varying their input currents in consonance with the electrical signals to be transmitted. Alloys of gallium, aluminium and arsenic are used for light sources operating in the 900 nm band; for wavelengths of 1100 nm to 1600 nm, alloys of indium, gallium, arsenic and phosphorus are used. Laser diodes are preferred for long distance communication since they produce coherent light beams that are highly monochromatic. The fibre itself is a thin, hair-like thread of glass or plastic with a cylinder at its centre, the ‘core’, surrounded by a material layer, the ‘cladding’. The core has a specific refractive index which is considered while choosing a fibre for a particular application. Owing to the difference in refractive indices between the core and the cladding, the light signal is trapped inside the core (by total internal reflection) even if the fibre is bent. There are two types of fibre depending on the size of the core: the single-mode fibre (core of about 10 micron diameter) and the multi-mode fibre (core of about 50 micron diameter). Composite optical fibre cables also contain copper wires for providing power to repeaters. Cables may be unarmoured or armoured with steel tape or steel wires, and different types of optical fibre cables are manufactured for specific applications.
The photo-detector senses the light power falling upon it and converts the power variations into correspondingly varying electric currents. Semiconductor photo-detectors, which are small in size and possess high sensitivity and fast response times, are used: the PIN photodiode and the avalanche photodiode (APD) are the two types most generally employed. Silicon, germanium and indium gallium arsenide are the materials used in making photodiodes. Even though the fibre is extremely pure, minuscule impurities absorb light, and light may be scattered due to variations in composition and molecular density. Bending also causes a slight loss of energy, and light pulses spread and overlap while travelling, causing difficulty in separating them at the receiver end. Despite all this, fibre optic communication has distinct advantages.
Optical fibre communications systems provide the following major advantages over other conventional systems:
(a) Optical fibre has an extremely wide bandwidth, permitting a far larger volume of information than is possible through conventional means. It is also possible to encapsulate several hundred fibres in one single cable.
(b) Repeater distances of the order of 80 km to 100 km are easily possible today.
(c) The higher channel capacity, larger repeater distances and extremely low maintenance cost make the systems cost effective.
(d) Optical fibre cables are about 25 times lighter in weight and at least 10 times smaller in size than conventional cables. This makes their transportation and installation much easier, and also makes them attractive for airborne, shipborne and space applications.
(e) Optical fibre communications are free from radio frequency interference, and are immune to electromagnetic interference and electromagnetic pulses. This results in unprecedented high fidelity.
(f) The communication is almost error-free.
(g) It is well nigh impossible to intercept optical fibre communication without being detected.
(h) The system requires simple maintenance. Very high reliability is achieved.
(i) Power consumption is extremely low.
The story of the invention of the telephone is a very dramatic one. (No wonder they were able to make a movie about it!) But first let’s make sure we understand the principle of how a telephone works.
When you speak, the air makes your vocal cords vibrate. These vibrations are passed on to the air molecules so that sound waves come out of your mouth, that is, vibrations in the air. These sound waves strike an aluminum disk or diaphragm in the transmitter of your telephone. And the disk vibrates back and forth in just the same way the molecules of air are vibrating.
These vibrations send a varying, or undulating, current, over the telephone line. The weaker and stronger currents cause a disk in the receiver at the other end of the line to vibrate exactly as the diaphragm in the transmitter is vibrating. This sets up waves in the air exactly like those which you sent into the mouthpiece. When these sound waves reach the ear of the person at the other end, they have the same effect as they would have if they came directly from your mouth!
Now to the story of Alexander Graham Bell and how he invented the telephone. On June 2, 1875, he was experimenting in Boston with the idea of sending several telegraph messages over the same wire at the same time. He was using a set of spring-steel reeds. He was working with the receiving set in one room, while his assistant, Thomas Watson, operated the sending set in the other room.
Watson plucked a steel reed to make it vibrate, and it produced a twanging sound. Suddenly Bell came rushing in, crying to Watson: “Don’t change anything. What did you do then? Let me see.” He found that the steel reed, while vibrating over the magnet, had caused a current of varying strength to flow through the wire. This made the reed in Bell’s room vibrate and produce a twanging sound.
The next day the first telephone was made, and voice sounds could be recognized over the first telephone line, which ran from the top of the building down two flights. Then, on March 10 of the next year (1876), the first sentence was heard: “Mr. Watson, come here, I want you.”
A basic telephone set contains a transmitter that transfers the caller’s voice; a receiver that amplifies sound from an incoming call; a rotary or push-button dial; a ringer or alerter; and a small assembly of electrical parts, called the antisidetone network, that keeps the caller’s voice from sounding too loud through the receiver. If it is a two-piece telephone set, the transmitter and receiver are mounted in the handset, the ringer is typically in the base, and the dial may be in either the base or handset. The handset cord connects the base to the handset, and the line card connects the telephone to the telephone line.
These are all methods for transmitting text rather than sounds. These text delivery systems evolved from the telegraph. Teletype and telex systems still exist, but they have been largely replaced by facsimile machines, which are cheaper and better able to operate over the existing telephone network. The teletype, essentially a printing telegraph, is primarily a point-to-multipoint system for sending text: it converts the same pulses used by telegraphs into letters and numbers, and then prints out readable text. It was often used by news media organizations to provide newspaper stories and stock market data to subscribers. Telex is primarily a point-to-point system that uses a keyboard to transmit typed text over telephone lines to similar terminals situated at individual company locations. Facsimile transmission now provides a cheaper and easier way to transmit text and graphics over distances. Fax machines contain an optical scanner that converts text and graphics into digital, machine-readable codes. This coded information is sent over ordinary analog telephone lines through a modem included in the fax machine; the receiving fax machine’s modem demodulates the signal and sends it to a printer also contained in the machine.
‘Wireless’ is once again at the forefront of communication systems. Today people are on the move constantly; the room-bound telephone is not of much use to these mobile people. The cordless telephone was the first step in giving mobility to the telephone. Now cellular mobile telecommunication technology has been developed.
In this system, the total area of operation is divided into a network of small areas called cells, varying in size from 0.5 to 40 km in diameter. Each cell has a transceiver (called the ‘cell site’ or the ‘radio base station’) to transmit and receive calls within that cell. A subscriber moving from one cell to another has his call transferred to the transceiver of the next cell without a break in the call.
The above so-called “cells” are hexagonal in shape, to suit the modular design of the network when expansion of the service area requires more cells to be added. In reality, however, the shape, size, location, etc., depend on other parameters like the radio pattern of the transceiver, signal strength, topography, and radio interference due to multipath reflection by objects. The mobile switching centre (MSC) links the cells together through a computer. Microwave or digital land-links connect the cells to the MSC, which is also linked to the public telephone network.
A call placed on a cellular phone is directed to the nearest cell site, from where it is directed to the MSC which, in turn, signals all cell sites to locate the person called. On locating the person, the MSC allocates a voice channel to connect the two users.
Pocket-size mobile telephone sets are now available because of miniaturization and high-power components. Cellular technology allows the same frequencies to be reused in more than one cell by skillfully manipulating the location and size of the cells. Cellular technology started as an analog design, but digital transmission has gradually increased the utilization of the frequency spectrum to up to seven times that allowed by analog transmission.
There are several technologies available for mobile telephones: TACS (Total Access Communication System), CT-2, NMT (Nordic Mobile Telephone), AMPS (Advanced Mobile Phone System), and PCN (Personal Communication Network). The GSM (Global System for Mobile Communications) technology has been chosen by India. It is a digital system operating in more than 70 countries, and has the facility of ‘international roaming’, i.e., automatically locating users wherever they are within the GSM network when they are called. GSM works in the 900 MHz band with a carrier spacing of 200 kHz. Time division multiple access (TDMA) is the basis of transmission: it enables various users to share a transmission channel by allotting each an equal time slot periodically and sequentially. GSM technology allows data services and protection against eavesdropping, besides future expansion of the network.
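As a back-of-the-envelope capacity check (the 25 MHz paired allocation and the guard carrier are standard GSM-900 figures assumed here, not stated in the text):

```python
# Rough channel count for GSM-900, assuming the standard 25 MHz paired band,
# 200 kHz carrier spacing, one carrier's worth kept as a guard band, and
# 8 TDMA time slots per carrier.
band_hz = 25_000_000
carrier_spacing_hz = 200_000
carriers = band_hz // carrier_spacing_hz - 1   # 124 usable carriers
slots_per_carrier = 8                          # TDMA time slots per carrier
print(carriers * slots_per_carrier)            # ~992 traffic channels before frequency reuse
```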
India is divided into 20 circles (service areas) and four metro cities for the cellular mobile telephone service.
Global System for Mobile (GSM): The Global System for Mobile Communications (GSM) is the worldwide dominant system; it originally evolved as a pan-European digital standard and built a base in the US and Canada at a rapid pace.
GSM uses Time Division Multiple Access (TDMA). TDMA is not a spread spectrum technology: a narrow frequency band is split time-wise into slots, and each conversation gets the radio for only part of the time. (The North American flavour of TDMA uses 30 kHz channels with 6.7-millisecond slots; GSM divides each 200 kHz carrier into eight slots.) TDMA thus allows eight simultaneous calls on the same GSM radio frequency.
GSM’s larger user base is its biggest advantage. This, and the roaming facility it allows, gives it an edge. The use of SIM cards is its other advantage. As of January 2003, GSM had about 60 million subscribers and a geographical reach covering up to 97 per cent of the world population.
In CDMA, the data is digitized and spread over the entire available bandwidth, unlike the narrow band of TDMA. Multiple calls are overlaid on each other on the channel, with each assigned a unique sequence code, and the data is reassembled at the receiver’s end. The battery life of CDMA handsets is longer than that of analogue phones, with a talk time of three to four hours and up to two-and-a-half weeks of standby time.
The disadvantage with CDMA handsets is that at the moment, these phones do not have a SIM card and are unique to the network they are initiated on. Limited roaming service is its other drawback.
Vital Differences between CDMA and GSM

Capacity:
CDMA: may increase, but only when a sufficient percentage of compatible handsets is in use.
GSM: reduces capacity by using voice slots.

Use:
CDMA: voice or data.
GSM: data only.

Scope of hardware upgrade:
CDMA: radio cards, data routing cards, voice-switching hardware, data routing hardware.
GSM: equipment frames, radio.

Observed data speeds (unconfirmed):
CDMA: 40-60 kbps.
GSM: 20-40 kbps.

Claimed maximum data speeds:
CDMA: 144 kbps.
GSM: 115 kbps.

Handset compatibility:
CDMA: all existing handsets can access voice systems.
GSM: new terminals required to access the voice system.

Note: kbps stands for kilobits per second.
Laser & Photonics
Have you ever wondered how a CD writer copies the information or data stored on one disc onto another, or reads it back? Or what a showroom manager does to tell you the price of a product, simply by reading the printed bar code with an illuminating device? In fact, there are examples galore in which a specially created source of light comes into play, and in each of them the science of physics is in live action before our eyes.
In both the cases exemplified above, and in many other similar instances familiar to us, what is at work is a specially created and modified beam of light that we call a LASER.
What is a LASER? LASER is an acronym for Light Amplification by Stimulated Emission of Radiation; it is essentially a device that amplifies light, producing an intensified, monochromatic beam of electromagnetic radiation. As we already know, light in its particulate nature comes to us in small packets of energy called photons. When a substance absorbs photons, its atoms get excited and electrons jump to a higher energy level. When these excited electrons are stimulated by further photons bombarding them, they jump back to a lower energy level and in the process release photons of the same frequency as the stimulating ones, travelling in the same direction. With a sufficient number of electrons held at the higher energy level, each stimulated emission triggers further emissions of the same energy and frequency, producing a cascade of stimulated radiation: a narrow beam of monochromatic light whose waves are parallel and in phase, carrying extremely high energy with almost negligible spread or diffraction. A beam of light produced this way is what we call a LASER beam.
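In symbols, the emitted photon carries exactly the energy gap between the two levels, and net amplification continues only while more electrons sit in the upper level than in the lower one (population inversion); a minimal statement of both conditions:

```latex
% Photon emitted when an electron drops from level E_2 to level E_1;
% h is Planck's constant and \nu the frequency of the emitted photon.
E_2 - E_1 = h\nu
% Net amplification requires population inversion between the two levels:
N_2 > N_1
```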
Going by the many virtues of laser light, most importantly its being very powerful, it has found use in diverse applications: from medical diagnostic instruments to playing music in your CD player, from reading price bar codes to cutting and welding metal pieces, and right up to information transmission. What is more, lasers find application in repairing damaged eyes (LASIK surgery) and even in guiding a missile onto its target.
A typical laser device is made up of four essential primary components: (i) an active or gain medium; (ii) a pumping source that supplies energy to excite the medium; (iii) a fully reflecting mirror; and (iv) a partially reflecting mirror through which the beam emerges, the two mirrors together forming the optical resonator.
The use of optical fibres in communication technology is well understood, given their information-carrying capacity over large distances. The signal-carrying capacity of these fibres has been increased further, and with considerably less signal wastage, by using laser beams inside the fibres as the information-carrying signals, which has indeed offered many advantages. Fibre optic laser systems today are used in various communication networks: many long-haul fibre networks using lasers, both for transcontinental connections and, through undersea cables, for international connections, are in operation. One advantage of optical fibre systems is the long distance that can be maintained before signal repeaters are needed to regenerate signals. These are currently separated by about 100 km (about 62 miles), compared to about 1.5 km (about 1 mile) for electrical systems. Newly developed optical fibre amplifiers can extend this distance even farther. Local area networks (LANs) are another growing application of fibre optics. Unlike long-haul communications, these systems connect many local subscribers to expensive centralized equipment such as computers and printers, expanding the utilization of equipment and easily accommodating new users on a network. Development of new electro-optic and integrated-optic components will further expand the capability of optical fibre systems.
Lasers in Everyday Physics: As noted above, the very nature of laser beams makes them indispensable in fields as diverse as equipment used in homes, factories, offices, hospitals and libraries, right up to the high-end technologies of information transmission via fibre optics, storage and copying of information on compact discs, and missile guidance in defence technology. Some of the most noteworthy and glaring applications of lasers deserve special mention below:
Storage & transmission of Information: The most common uses of lasers include the recording of music, motion pictures, computer data and other material on compact discs. Bursts of laser light record such material on the discs in patterns of tiny pits. A laser beam’s tight focus allows much more information to be stored on a CD or DVD than on a phonograph record, making them good media for holding data as well as music and movies.
Reading stored information: Lasers can also be employed for reading and playing back information recorded on discs. In a CD/DVD player, a laser beam reflects off the pattern of pits as the compact disc spins. Other devices in the player change the reflections into electrical signals and decode them as music. More lasers are used in CD/DVD players than in any other electronic product or gadget.
Holography: Laser beams can produce three-dimensional images in a photographic process called holography, a method for storing and displaying a three-dimensional image, usually on a photographic plate or other light-sensitive material. The laser-exposed plate is called a hologram; hence the name “holography” for the technique. A typical example is the hologram carried on credit cards to prevent counterfeiting.
Fibre-optics: One of the laser’s greatest uses is in the field of fibre-optic communication. This technology changes the electrical signals of telephone calls and television pictures into pulses (bursts) of laser light that are then transmitted along long strands of glass called optical fibres.
Scanning: Scanning involves the movement of a laser beam across a surface. Scanning beams are often used to read information; laser scanners at supermarket checkout counters are a familiar sight. What looks like merely a line of light is actually a rapidly moving laser beam scanning a bar code, a pattern of lines and spaces printed on packages that identifies the product. The scanner reads the pattern and sends the information to a computer in the store, which identifies the item’s price and sends it to the register. Such scanners also keep track of books in libraries, sort mail in post offices and read account numbers on cheques in banks. Laser printers use a scanning laser beam to produce copies of documents, and other scanners make printing plates for newspapers.
Entertainment: In entertainment, laser light shows are created with scanning laser beams. These beams can “draw” spectacular patterns of red, yellow, green and blue light on buildings or other outdoor surfaces. The laser beams move so rapidly that they produce what looks like a stationary picture.
Heating: A laser beam’s highly focused energy can produce a great amount of heat. Industrial lasers, for example, produce beams of thousands of watts of power. They thus are used for cutting and welding metals or for drilling holes.
Medicine: In medicine, the heating power of lasers is often used in eye surgery, including the high-end procedure called LASIK (laser-assisted in-situ keratomileusis). Highly focused laser beams are used to close off broken blood vessels in the retina, the tissue at the back of the eyeball, or to re-attach a loose retina. The laser beams pass through the cornea (the front surface of the eye) without pain or damage, because the cornea is transparent and does not absorb the light. Doctors also use lasers to treat skin disorders, remove birthmarks, and shatter gallstones or kidney stones in a procedure called lithotripsy.
Nuclear technology: In nuclear energy research, scientists use lasers to produce controlled, miniature hydrogen bomb explosions. They focus many powerful laser beams onto a pellet of frozen forms of hydrogen. The intense beams compress the pellet and heat it by millions of degrees. These actions cause the nuclei to fuse and release energy, a prototype experiment to display fusion bomb technology.
Biochemistry: Some laser systems, through the process of mode locking, can produce extremely brief pulses of light, as short as picoseconds or femtoseconds. Such pulses can be used to initiate and analyze chemical reactions. This method is particularly useful in biochemistry, where it is used to analyze details of protein folding and function.
Measurements: Lasers are also used to measure distance. An object’s distance can be determined by measuring the time a pulse of laser light takes to reach the object and reflect back from it. Laser beams directed over long distances can also detect small movements of the ground; such measurements help geologists working on earthquake warning systems. Laser devices used to measure shorter distances are called range finders. Surveyors use these devices to gather the information needed to make maps, while military personnel use them to calculate the distance to an enemy target.
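A minimal sketch of the pulse-timing principle just described (the 2-microsecond return time is an invented, illustrative value):

```python
# Range finding: a laser pulse travels to the target and back, so the
# one-way distance is c * t / 2.
C = 299_792_458  # speed of light in vacuum, m/s

def distance_m(round_trip_seconds):
    return C * round_trip_seconds / 2

# A pulse returning after 2 microseconds puts the target about 300 m away.
print(round(distance_m(2e-6)))  # ~300
```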
Guidance: A laser’s strong, straight beam makes it a valuable tool for guidance. For example, construction workers use laser beams as “weightless strings” to align the walls and ceilings of a building and to lay straight sewer and water pipes. Special instruments called laser gyroscopes use laser beams to detect changes in direction, and so find extensive use in ships, airplanes and guided missiles to help them stay on course. Another military use of lasers is in a guidance device called a target designator: a soldier aims a laser beam at an enemy target, and missiles, artillery shells and bombs equipped with laser-beam detectors seek the reflected beam and adjust their flight to hit the spot where the beam is aimed.
Laser cooling: A technique that has met with recent success is laser cooling. This involves ion or atom trapping, wherein a number of ions or atoms are confined in a specially shaped arrangement of electric and magnetic fields. Shining particular wavelengths of laser light on them slows them down, thus cooling them. If the process is continued, eventually they are all slowed to the same energy level, forming an unusual state of matter known as a Bose-Einstein condensate (first conceptualized by S.N. Bose).
What is Photonics? “Photonics” broadly refers to the study of light, where “light” includes much more than the visible wavelengths of the spectrum: in photonics, energy and information are carried or transmitted by photons (particles of light) and not by electrons, as in conventional electronics. Photonics thus uses the wave/particle nature of light to create new, high-end optical materials and devices.
In fact, owing to the considerable research currently under way in this newly emerging field of experimental physics, photonics is likely to replace conventional electronics and electronic components with optical components, offering a wide range of revolutionary possibilities in information storage and transmission, including a significant increase in data processing speed in computational systems. A new offshoot of electronics is thus very much in the offing, which we will soon come to know as “optoelectronics.”
The field of photonics is huge and applications can be found in virtually all technological industries. The long list of photonic applications has grown practically by the day.
Some examples of the major application areas of photonics are mentioned below:
Electronics:
Health:
Industrial Process Control:
Instrumentation:
Telecom:
Telecommunication is, in fact, a wider term and refers to a system or a set of devices that can send or transmit information over long distances after converting it into electronic or electrical signals. Telecommunication devices thus convert different types of information, such as sound (audio), video (motion pictures) or words, into electronic signals. These messages, once converted into electronic signals, are then sent to the respective media devices: sound over the telephone, video over the TV, and words or pictures over the PC, where the recipient receives them.
In short, telecommunications begins with messages that are converted into electronic signals. The signals are then sent over a medium to a receiver, where they are decoded back into a form that the person to whom the message is sent can understand. There are a variety of ways to create and decode signals, and many different ways to transmit them.
Significantly, all telecommunication services, whether telephone, television, radio or IT (including the internet), have their functional basis in communication satellites, because all of them use radio-frequency signals that are transmitted or relayed down to the earth by a communication satellite orbiting the earth in a fixed orbit.
A communication satellite in fact, includes any earth-orbiting spacecraft that provides communication over long distance by reflecting or relaying radio-frequency signals. Satellite relay systems today, have in a way revolutionized the communication by making worldwide telephone links and live broadcasts, a common occurrence.
A communication satellite actually works by receiving a microwave signal from a ground station on the earth (the uplink), amplifying it, and then retransmitting it to a receiving station or stations on earth at a different frequency, generally in the radio frequency range (the downlink).
A communication satellite used for telecommunication services such as telephone and television is always placed in a geosynchronous orbit, which means it orbits the earth at the same rate as the earth rotates on its own axis. The satellite thus stays in the same position relative to the surface of the earth, so that the broadcasting station on the ground never loses contact with the receiver.
Some of the first communications satellites were designed to operate in a passive mode. Instead of actively transmitting radio signals down to the earth, they served merely to reflect signals beamed up to them by transmitting stations on the ground, without changing their frequency. Signals were thus reflected in all directions, so they could be picked up by receiving stations around the world.
A satellite in a geosynchronous orbit follows a circular orbit over the equator at an altitude of about 35,800 km (22,300 miles), often rounded off to 36,000 km, completing one orbit around the earth every 24 hours, the time the earth takes to rotate once on its axis. Moving in the same direction as the earth’s rotation, the satellite thus remains in a fixed position over a point on the equator, providing uninterrupted contact between ground stations in its line of sight. Most communications satellites that followed over the years were also placed in geosynchronous orbit.
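The 35,800 km figure can be recovered from Kepler's third law; a short sketch using standard textbook values for the earth's gravitational parameter and sidereal day (assumed here, not given in the text):

```python
import math

# Radius of a geosynchronous orbit from Kepler's third law:
# r = (G*M*T^2 / (4*pi^2))^(1/3)
GM_EARTH = 3.986004418e14   # gravitational parameter of the earth, m^3/s^2
T_SIDEREAL = 86_164         # one rotation of the earth, seconds
R_EARTH = 6_378_000         # equatorial radius of the earth, m

r = (GM_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
print(round((r - R_EARTH) / 1000))  # altitude ~35,786 km, matching the figure above
```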
Anatomically, a radio is built around the same heart as almost any other electronic device: the transistor. Functionally, as a telecommunication device, it uses sound-carrying signals, called radio signals, to transmit information electronically. Conventional radio (AM) works by modifying only the amplitude of the radio waves, whereas the newer breed of transmission, FM, works by modifying their frequency, hence the name. Of the two, FM has clear advantages over AM, as enumerated below for a clear understanding…
AMPLITUDE MODULATION OR AM:
Amplitude modulation or AM is a form of modulation used in radio transmissions for broadcasting and two-way radio communication. Although it is one of the earliest forms of modulation, it is still in widespread use today.
How does it work? In order that a radio signal can carry audio or other information for broadcasting or for two-way radio communication, it must be modulated, i.e., changed in some way. Although there are a number of ways in which a radio signal may be modulated, the simplest is to change its amplitude in line with variations of the sound. The basic concept of AM is thus quite straightforward: the amplitude of the signal is changed in line with the instantaneous intensity of the sound, so that the radio frequency signal has a representation of the sound wave superimposed on it. Because the basic signal “carries” the sound or modulation, the radio frequency signal is often termed the “carrier”.
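In symbols, the modulated carrier described above can be written in the standard textbook form (with m the modulation index and x(t) the normalized audio signal, both notations assumed here):

```latex
% Amplitude-modulated carrier: the audio signal x(t), with |x(t)| <= 1,
% scales the envelope of a carrier of amplitude A_c and frequency f_c;
% m is the modulation index.
s(t) = A_c \left[ 1 + m\,x(t) \right] \cos(2\pi f_c t)
```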
AM broadcasting: It is the process of radio broadcasting using amplitude modulation. AM receiver detects amplitude variations in the radio waves at a particular frequency. It then amplifies changes in the signal voltage to drive a loudspeaker or earphones.
Advantages of AM: There are several advantages of amplitude modulation, and some of them explain why it is still in widespread use today:
I) It is simple to implement;
II) It can be demodulated using a circuit consisting of very few components;
III) AM receivers are very cheap as no specialized components are needed.
Disadvantages of AM: AM is a very basic form of modulation, and although its simplicity is one of its major advantages, more sophisticated systems score over it in several respects. Accordingly, it is worth looking at its disadvantages: (i) it is not efficient in terms of power usage; (ii) it is not efficient in its use of bandwidth, requiring a bandwidth equal to twice the highest audio frequency; and (iii) it is prone to high levels of noise, because most noise is amplitude-based and AM detectors are naturally sensitive to it.
FREQUENCY MODULATION OR FM:
While changing the amplitude of a radio signal is the most obvious way to modulate it, it is by no means the only way. It is also possible to change the frequency of the signal, giving frequency modulation or FM. Frequency modulation is widely used on frequencies above about 30 MHz, and it is particularly well known for its use in VHF FM broadcasting. Although it may not be quite as straightforward as AM, FM nevertheless offers some distinct advantages vis-à-vis AM. It is able to provide near interference-free reception, and it was for this reason that it was adopted for VHF sound broadcasts. These transmissions can offer high-fidelity audio, and for this reason FM is far more popular than the older transmissions on the long, medium and short wave bands. In addition to its widespread use for high-quality audio broadcasts, FM is also used for a variety of two-way radio communication systems, whether for fixed, mobile or portable applications, at VHF and above.
What is FM? To generate a frequency modulated signal, the frequency of the radio carrier is changed in line with the amplitude of the incoming audio signal. When the audio signal is modulated on to the radio frequency carrier, the new radio frequency signal moves up and down in frequency. The amount by which the signal moves up and down is important. It is known as the deviation and is normally quoted as the number of kilohertz deviation.
What is WBFM? Broadcast stations in the VHF portion of the frequency spectrum, between 88 and 108 MHz, use large values of deviation, typically ±75 kHz. This is known as wide-band FM (WBFM). These signals are capable of supporting high-quality transmissions, but occupy a large amount of bandwidth: usually 200 kHz is allowed for each WBFM transmission.
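Carson's rule gives a quick estimate of why roughly 200 kHz per station suffices; taking the highest audio frequency as 15 kHz (a standard broadcast figure assumed here, not stated in the text):

```latex
% Carson's rule estimates FM bandwidth from the peak deviation \Delta f
% and the highest modulating audio frequency f_m. For WBFM broadcasting
% with \Delta f = 75 kHz and f_m = 15 kHz:
B \approx 2(\Delta f + f_m) = 2(75 + 15)\,\text{kHz} = 180\,\text{kHz}
```

This sits comfortably inside the 200 kHz allotted to each station.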
What is NBFM? For communications purposes, less bandwidth is used. Narrow-band FM (NBFM) often uses deviation figures of around ±3 kHz. It is NBFM that is typically used for two-way radio communication, where the full audio bandwidth of broadcasting is not needed.
Advantages of FM: FM is used for a number of reasons, and frequency modulation has several advantages which make it ideally suited to a wide range of applications, as noted below:
Thus, we now clearly understand that frequency modulation (FM) is the system of radio transmission in which the carrier wave is modulated so that its frequency varies with the audio signal being transmitted.
The first workable frequency modulation system was demonstrated by the American inventor Edwin H. Armstrong in 1936; Armstrong otherwise made immense contributions to the science of electronics, particularly during its nascent stage.
Frequency modulation has several advantages over the system of amplitude modulation (AM) used in the alternative form of radio broadcasting. The most important of these is the FM system’s greater freedom from interference and static. Various electrical disturbances, such as those caused by thunderstorms and automobile ignition systems, create amplitude-modulated radio signals that are received as noise by an AM receiver, whereas a well-designed FM receiver is not sensitive to such disturbances when it is tuned to an FM signal of sufficient strength. Also, the signal-to-noise ratio in an FM system is much higher than that of an AM system. Finally, FM broadcasting stations can be operated in the very high frequency bands, at which AM interference is frequently severe; commercial FM radio stations are assigned frequencies between 88 and 108 MHz. The range of transmission on these bands is limited, so that stations operating on the same frequency can be located within a few hundred miles of one another without interference.
These features, coupled with the comparatively low cost of equipment for an FM broadcasting station, resulted in its rapid growth in the years following World War II (1939-1945). Because of crowding in the AM broadcast band and the inability of standard AM receivers to eliminate noise, the tonal fidelity of standard stations is purposely limited. FM does not have these drawbacks and therefore, can be used to transmit musical programs that reproduce the original performance with a degree of fidelity that cannot be reached on AM bands.
Physics in natural phenomena
Questions
Scientific Explanation
A man with a load on his head jumps from a high building. What will be the load experienced by him? Concept: Gravitation & Gravity.
Zero, because the acceleration of his fall is equal to the acceleration due to gravity of the earth.
Why is a spring made of steel and not of copper? Concept: Hooke’s Law
The elasticity of steel is greater than that of copper.
Which is more elastic, rubber or steel? Concept: Hooke’s Law
Steel is more elastic: for the same stress, the strain produced in rubber is much greater than that in steel, and elasticity varies inversely with strain.
Why is it easier to spray water to which soap is added? Concept: Surface tension
Addition of soap decreases the surface tension of water. The energy required for spraying is directly proportional to the surface tension.
A piece of chalk when immersed in water, emits bubbles. Why?
Concept: Capillarity/Capillary Action
Chalk consists of pores forming capillaries. When it is immersed in water, the water begins to rise in the capillaries and air present there is expelled in the form of bubbles.
Why does a liquid remain hot or cold for a long time inside a thermo flask?
Concept: Conduction & Convection
Because of the vacuum between the double glass walls of a thermos flask; heat cannot pass through the vacuum by conduction or convection.
Why does a ball bounce up on falling?
Concept: Elasticity & Newton’s law
When a ball falls, it is temporarily deformed. Due to elasticity the ball tends to regain its original shape, for which it presses the ground and bounces up (Newton’s third law of motion).
Why is standing in boats or double-decker buses not allowed, particularly in the upper floor of buses?
Standing raises the centre of gravity of the boat or bus; on tilting, the vertical line through the centre of gravity may fall outside the base, making it likely to overturn.
Why is the boiling point of sea water more than that of pure water?
Sea water contains dissolved salts and other impurities, which raise its boiling point above that of pure water.
Why is it recommended to add salt in water while boiling grams?
By addition of salt the boiling point of water gets raised which helps in cooking.
Why is soft iron used as an electromagnet?
Concept: Controlled magnetism property.
Because it remains magnetic only till the current passes through the coil and loses its magnetism when the current is switched off (principle of electric bells).
Why is the sky blue?
Concept: Scattering of light; blue light being of shorter wavelength scattered the most.
Violet and blue light have shorter wavelengths and are scattered more than red light. While red light goes almost straight through the atmosphere, blue and violet are scattered by particles in the atmosphere. Thus, we see a blue sky.
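The underlying relation is Rayleigh scattering, whose intensity varies as the inverse fourth power of wavelength; with illustrative wavelengths of about 450 nm for blue and 650 nm for red (assumed round figures):

```latex
% Rayleigh scattering: intensity scattered by air molecules varies as the
% inverse fourth power of wavelength, so blue is scattered far more
% strongly than red.
I \propto \frac{1}{\lambda^4}, \qquad
\frac{I_{450\,\text{nm}}}{I_{650\,\text{nm}}} = \left(\frac{650}{450}\right)^4 \approx 4.4
```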
Why does ink leak out of a partially filled pen when taken to a higher altitude?
Concept: Low Atmospheric pressure at higher altitudes.
As we go up, the pressure and density of air keep decreasing. A partially filled pen leaks when taken to a higher altitude because the pressure of the air trapped inside the tube of the pen is greater than the pressure of the air outside.
On the moon will the weight of a man be less or more than his weight on the earth? Concept: Gravity on moon is 1/6th of the earth.
The gravity of the moon is one-sixth that of the earth; hence the weight of a person on the surface of the moon will be one-sixth of his weight on the earth.
Why do some liquids burn while others do not? Concept: Combustion
A liquid burns if its molecules can combine with oxygen of the air with the production of heat. Hence, oil burns but water does not.
Oil and water do not mix. Why?
Concept 1: Like dissolves like (polar and non-polar substances do not mix).
Concept2: Specific gravity of oils (fats) is less than that of the water.
Molecules of oil are bigger than those of water and therefore do not mix with them easily.
Molecules of water are also polar, i.e., they have opposite charges at their two ends, whereas oil molecules do not; as a consequence, the two tend to stay apart. Moreover, oils are essentially fats, which have a lower specific gravity than water; hence all fats float on water and are immiscible with it.
How can we see ourselves in a mirror? Concept: Reflection of light.
We see objects when light rays from them enter our eyes. As mirrors have a shiny surface, the light rays are reflected and come back to us and enter our eyes.
Why does a solid chunk of iron sink in water but float in mercury? Concept: Laws of floatation & sinking.
Because the density of iron is more than that of water but less than that of mercury.
Why do stars twinkle?
Concept: Refraction of Light.
The light from a star reaches us after refraction as it passes through various layers of air. When the light passes through the earth’s atmosphere, it is made to flicker by the hot and cold ripples of air and it appears as if the stars are twinkling.
Why is cooking quicker in a pressure cooker? Concept: Increase in the boiling point.
As the pressure inside the cooker increases, the boiling point of water is raised, hence, the cooking process is quick.
When wood burns it crackles. Why?
Concept: Escape of the volatile gases and vapors from the burning wood.
Wood contains a complex mixture of gases and tar-forming vapours trapped near the surface. These gases and tar vapours escape, making a crackling sound.
If a feather, a wooden ball and a steel ball fall simultaneously in a vacuum, which one of these would fall faster?
Concept: Laws of Gravity & Gravitation.
All will fall at the same rate in a vacuum, because there is no air resistance and the earth’s gravity imparts the same acceleration to every falling body regardless of its mass.
When a man fires a gun, he is pushed back slightly. Why?
Concept: Newton’s third law of motion.
As the bullet leaves the barrel of the gun with momentum in the forward direction, the gun, as per Newton’s third law of motion, acquires an equal momentum in the backward direction, and so recoils.
Why does a body weigh slightly more at the poles than at the equator?
Concept: The earth is flattened at the poles, so the poles are nearer its centre and the value of g is higher there.
The gravitational pull of the earth is greater at the poles because the poles are nearer to the centre of the earth; thus the weight of a body is greater there than at the equator.
Why is it easier to roll a barrel than to pull it? Concept: Force of Friction.
Because rolling friction is less than sliding (kinetic) friction.
Ice wrapped in a blanket of saw dust does not melt quickly. Why? Concept: Heat transmission by conduction.
Both the blanket and the saw dust are bad conductors of heat; they do not permit outside heat to reach the ice easily, so the ice melts slowly.
Why do we perspire on a hot day?
Concept: Evaporation has a cooling effect
When the body temperature rises, the sweat glands of the body are stimulated to secrete perspiration. It is nature’s phenomenon to keep the body cool. During the process of evaporation of sweat some body heat is taken away thus giving a sense of coolness.
Why do we perspire before rains?
Concept: Vapor pressure
Before rain falls, the atmosphere gets saturated with water vapour; as a result, the evaporation of sweat is delayed and the sweat remains on the body.
Why does ice float on water but sink in alcohol? Concept: Water expands on freezing, so the ice formed is less dense than water.
Because ice is less dense than water, it floats on it. However, ice is denser than alcohol and therefore sinks in it.
Why does a thermometer kept in boiling water show no change in reading?
The boiling point of water is 100°C. Once water starts boiling at this temperature, the thermometer records no further change, because the heat being supplied is used up as latent heat of vaporization to convert the boiling water into vapour, not to raise its temperature.
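As a worked illustration of latent heat, one can estimate how long a heating element takes to boil away water whose reading is already steady at 100°C; the power and mass below are hypothetical:

    Lv    = 2.26e6   # latent heat of vaporization of water, J/kg (standard value)
    mass  = 0.5      # kg of water already at 100 C (illustrative)
    power = 2000.0   # W supplied by the element (illustrative)

    # Every joule goes into vaporization; the thermometer reading never moves.
    time_s = mass * Lv / power
    print(time_s / 60.0)  # ~9.4 minutes of boiling at a constant 100 C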
Why do we bring our hands close to the mouth while shouting across to someone far away?
By keeping the hands close to the mouth, the sound is not allowed to spread in all directions (diffraction of sound) but is directed towards the listener, so it carries farther and is heard louder.
Why does a corked bottle filled with water burst if left out on a frosty night?
Due to the low temperature, the water inside the bottle freezes. On freezing it expands, so its volume increases and it exerts enough pressure on the walls to burst the bottle.
Why is a small gap left at the joint between two rails?
To permit the rails to expand as they are heated, by the summer sun and by the friction of moving trains; without the gap, the expanding rails would press against each other and buckle.
A cyclist has to use more force at the start than when the cycle is in motion. Explain.
To set the cycle in motion, its inertia must be overcome and momentum built up, which requires a larger force. Once the cycle is moving, only a small force is needed to overcome friction and air resistance and maintain that momentum.
Why cannot a copper wire be used to make elements in electric heaters?
Copper melts at 1083°C and forms a black oxide powder on reacting with atmospheric oxygen. Moreover, a heater element must be made of a metal of high resistance to produce heat, and the resistance of copper is too low.
Why is water or mercury always round when dropped on a clean glass?
The surface of a liquid is the seat of a special force, surface tension, as a result of which the molecules on the surface are bound together like a stretched membrane. The surface tends to contract to the smallest possible area, and since a sphere has the minimum surface area for a given volume, the drop takes a round shape.
Why does a balloon filled with hydrogen rise in the air?
Hydrogen is lighter than air, so the weight of a hydrogen-filled balloon is less than the weight of the air it displaces; the net upthrust makes it rise.
Why does smoke curl up in the air?
Smoke consists of hot gases which, being lighter than the surrounding air, rise; the eddy currents set up in the air make the smoke follow a curling path.
Why do we lean forward while climbing a hill?
In order to keep the vertical line passing through our centre of gravity between our feet, which is necessary for balance or stability on the slope.
Why does an electric bulb explode when it is broken?
The bulb encloses a partial vacuum; as it breaks, air rushes in to fill the empty space, and the violent inrush produces the small explosive pop.
Why does a man fall forward when he jumps out of a running train, or when a sudden brake is applied to a running vehicle?
He is in motion along with the train. When he jumps out, his feet come to rest on touching the ground, but his upper body, still in motion because of inertia, carries him forward.
Why does an ordinary glass tumbler crack when very hot tea is poured in it?
When hot tea is poured in, the inner layer of the tumbler heats up and expands before the outer layer does; this unequal expansion of the two layers sets up a strain that cracks the glass.
Why is a compass used as an indicator of direction?
The magnetic needle of a compass, under the influence of the earth's magnetic field, always comes to rest in a nearly north-south direction. Hence we can identify the directions.
Why is water from a hand pump warm in winter and cold in summer?
In winter the outside temperature is lower than that of water flowing out of the pump and, therefore, it feels warm. Whereas in summer the outside temperature is higher than the water of the pump and, therefore, it feels cold.
Why is a rainbow seen after a shower? Concept: Dispersion of light.
After a shower, the tiny water droplets suspended in the air act like prisms through which the white sunlight is dispersed, producing the spectrum we see as a rainbow.
Why does a swimming pool appear less deep than it actually is?
Concept: Refraction of light.
The rays of light coming from the bottom of the pool pass from a denser medium (water) to a rarer medium (air) and are refracted, bending away from the normal. The emerging rays form an image of the bottom of the pool at a point a little above its real position, so the pool appears shallower than it is.
Why does kerosene oil float on water?
Concept: Density of water.
Because the density of kerosene oil is less than that of water. For the same reason cream rises in milk and floats on the top.
Why is one's breath visible in winter but not in summer? Concept: Condensation of water vapour.
Exhaled breath contains water vapour. In winter the outside air is far colder than the body, so this vapour condenses into tiny droplets that are visible as a small cloud; in summer the warm air keeps the vapour from condensing.
Why does the electric filament in an electric bulb not burn up?
Concept: Absence of oxygen, which is essential for combustion of the filament.
First, the filament is made of tungsten, whose melting point (3410°C) is far above the temperature of about 2700°C at which it glows. Second, there is no oxygen to burn it, since the bulb is evacuated or filled with an inert gas which does not support combustion.
Why does blotting paper absorb ink?
Concept: Capillarity
Blotting paper has fine pores which act like capillaries. When a portion of the blotting paper is brought in contact with ink, the ink rises into the pores by capillary action (which is itself driven by surface tension) and is absorbed.
Why does a small ball of iron sink in water but a ship float?
The weight of water displaced by an iron ball is less than its own weight, whereas water displaced by the immersed portion of a ship is equal to its weight (Archimedes Principle).
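The same principle can be checked numerically: a hull floats because its average density, its mass divided by the volume it encloses (mostly air), falls below that of water. The figures below are hypothetical:

    WATER = 1000.0  # density of water, kg/m^3

    def average_density(mass_kg, enclosed_volume_m3):
        # A hull encloses a great deal of air, so its average density is low.
        return mass_kg / enclosed_volume_m3

    iron_ball = 7870.0                          # solid iron, kg/m^3
    ship_hull = average_density(10e6, 25000.0)  # a 10,000-tonne ship enclosing 25,000 m^3

    print(iron_ball > WATER)  # True  -> the solid ball sinks
    print(ship_hull > WATER)  # False -> average density ~400 kg/m^3, the ship floats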
Why does ice float on water?
The weight of the ice block is equal to the weight of the liquid displaced by the immersed portion of the ice.
A tumbler is filled to the brim with water and a piece of ice floats in it. When the ice melts, will the water overflow?
No. The floating ice displaces water equal to its own weight, and on melting it yields exactly that much water, which just fills the space the immersed ice had occupied, so the level does not rise. (The droplets that appear on the outside of such a tumbler are water vapour from the air condensing on the cold glass.)
Why is the water in an open pond cool even on a hot summer day?
As the water evaporates from the open surface of a pond, heat is taken away in the process, leaving the surface cool.
Why is it difficult to cook rice or potatoes at higher altitudes?
Atmospheric pressure at higher altitudes is low, and the boiling point of water falls with the pressure on its surface, so water boils below 100°C. At this lower temperature, cooking takes much longer.
Why is it difficult to breathe at higher altitudes?
Due to the low air pressure at higher altitudes, the air is thinner, so each breath takes in less air and hence less oxygen.
Why are winter nights warmer, and summer nights hotter, in cloudy weather than when the sky is clear?
Clouds absorb and re-radiate the heat radiated by the land, preventing it from escaping into the sky. As this heat remains in the lower atmosphere, cloudy nights are warmer.
Why are cloudy days cooler?
Clouds reflect and absorb much of the sun's radiation and do not permit it to reach the earth, so cloudy days are cooler.
Why is a metal tyre heated before it is fixed on wooden wheels?
On heating, the metal tyre expands and its circumference increases, which makes it easy to slip over the wooden wheel. On cooling, the tyre shrinks and grips the wheel tightly.
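The extra length gained on heating follows from linear expansion, C = pi*D*(1 + alpha*dT); the diameter and temperature rise below are illustrative, with a typical expansion coefficient for steel:

    import math

    alpha = 12e-6   # linear expansion coefficient of steel, per K (typical value)
    D     = 1.0     # wheel diameter in metres (illustrative)
    dT    = 300.0   # temperature rise in K (illustrative)

    extra = math.pi * D * alpha * dT
    print(extra * 1000.0)  # ~11.3 mm of extra circumference, enough to slip the tyre on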
Why is it easier to swim in the sea than in a river?
The density of sea water is higher than that of river water, so the upthrust on the swimmer is greater in the sea.
Who will possibly learn swimming faster – a fat person or a thin person?
The fat person. Fat is less dense than muscle and bone, so for the same weight a fat person's body has a larger volume, displaces more water and gains greater upthrust, floating more freely than a thin person.
Why is a flash of lightning seen before thunder?
Because light travels faster than sound; it reaches the earth before the sound of thunder.
Why cannot petrol fire be extinguished by water?
Water, being heavier than petrol, slips below it, allowing the petrol to rise to the surface and continue burning. Besides, the temperature of the fire is so high that a small quantity of water poured on it evaporates even before it can smother the flames.
Why does water remain cold in an earthen pot?
There are pores in the earthen pot which allow water to percolate to the outer surface. Here evaporation of water takes place thereby producing a cooling effect.
Why does an ordinary pendulum clock lose time in summer?
In summer, due to the heat, the length of the pendulum increases. This in turn increases the duration of each oscillation, and the clock loses time.
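Since the period of a pendulum is T = 2*pi*sqrt(L/g), even a small thermal expansion of the rod produces a measurable daily loss; the sketch below assumes a roughly one-metre pendulum and a typical expansion coefficient for brass:

    import math

    g, L = 9.81, 1.0         # m/s^2; ~1 m pendulum (illustrative)
    alpha, dT = 19e-6, 20.0  # expansion coefficient of brass per K; +20 K in summer

    T_cold = 2 * math.pi * math.sqrt(L / g)
    T_hot  = 2 * math.pi * math.sqrt(L * (1 + alpha * dT) / g)

    # Each oscillation takes slightly longer, so the clock falls behind.
    print(86400.0 * (T_hot - T_cold) / T_cold)  # ~16 seconds lost per day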
Why are mercury thermometers not used to measure very low temperatures?
The freezing point of mercury is -39°C, so it cannot measure temperatures below that; moreover, its expansion becomes non-uniform at very low temperatures.
Why do we place a wet cloth on the forehead of a patient suffering from high temperature?
Due to the body's high temperature, water keeps evaporating from the wet cloth; the evaporation produces cooling and brings the temperature down.
Why do we apply Eau-de-Cologne to a person having high temperature?
Eau-de-Cologne contains alcohol, which evaporates quickly and takes away much of the local heat from the body of the person.
When a needle is placed on a small piece of blotting paper floating on clean water, the blotting paper sinks after a few minutes but the needle floats. However, in a soap solution the needle sinks. Why? Concept: Decrease in surface tension.
The surface tension of clean water, being higher than that of soap solution, can support the weight of the needle. The addition of soap reduces the surface tension of the water, and the needle sinks.
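The surface can support the needle roughly when 2*gamma*l exceeds its weight, since tension acts along both lines of contact; the needle's size and mass below are illustrative, with standard surface-tension values:

    g = 9.81
    gamma_water, gamma_soap = 0.072, 0.030  # N/m (typical values)
    length, mass = 0.035, 0.0003            # a 3.5 cm, 0.3 g needle (illustrative)

    def supported(gamma):
        # Surface tension pulls up along both sides of the needle: F = 2 * gamma * l.
        return 2 * gamma * length >= mass * g

    print(supported(gamma_water))  # True  -> the needle rests on clean water
    print(supported(gamma_soap))   # False -> in soap solution it sinks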
To prevent growth of mosquitoes, it is recommended to sprinkle oil in the ponds with stagnant water. Why?
Concept: Sprinkled oil (such as kerosene) lowers the surface tension of the water.
Mosquitoes breed in stagnant water. The larvae of mosquitoes keep floating on the surface of water due to surface tension. However, when oil is sprinkled, the surface tension is lowered resulting in drowning and death of the larvae.
Why does oil rise on a cloth tape of an oil lamp? Concept: Capillarity
The fine pores in the cloth wick act as capillaries, and the oil rises up through them by capillary action.
Why are ventilators in a room always made near the roof?
Concept: Convection current
Hot air, being lighter, rises and escapes through the ventilators near the roof, while cooler fresh air enters from below, setting up a convection current that keeps the room aired.
Why are chimneys of factories using boilers high?
The gases produced in the boilers are hot and, being lighter, tend to rise. Tall chimneys discharge these gases high up into the atmosphere, keeping the lower layers of air, which we breathe, free of the pollution.
How does ink get filled in a fountain pen?
Concept: Atmospheric pressure.
When the rubber tube of a fountain pen immersed in ink is pressed, the air inside the tube is driven out; when the pressure is released, the atmospheric pressure acting on the ink pushes it up into the tube to fill the space the air has left.
Why are air coolers less effective during the rainy season?
Concept: Slow, even negligible, rate of evaporation.
During the rainy season the atmospheric air is nearly saturated with moisture. The evaporation of water from the wet pads of the cooler therefore slows down, and the air blown out of the cooler is hardly cooled at all.
Why does grass gather more dew in nights than metallic objects like stones?
Concept: Radiative cooling and transpiration.
Grass is a good radiator of heat, so it cools quickly at night and water vapour in the air readily condenses on it. Moreover, grass constantly gives out water (transpiration), which keeps the air near it saturated with vapour and slows evaporation of the dew that forms. Dew forms most readily on objects which are good radiators and bad conductors of heat; stones and metallic objects gather less.
Why does powdered sugar dissolve faster than crystalline sugar?
Concept: The smaller the particle size, the larger the total surface area.
Because the powder exposes more surface area to the water, and the greater the exposed surface of a solid, the faster it dissolves.
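The scaling is easy to verify: dividing a cube into smaller cubes keeps the volume fixed but multiplies the exposed area by the number of divisions along each edge:

    def total_surface_area(side_cm, n):
        # Split a cube of the given side into n^3 smaller cubes and sum their areas.
        small = side_cm / n
        return (n ** 3) * 6 * small ** 2

    print(total_surface_area(1.0, 1))    # 6 cm^2   -> a single 1 cm crystal
    print(total_surface_area(1.0, 100))  # 600 cm^2 -> the same mass ground to powder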
Eggs and rice do not boil easily at high altitudes. Why?
Concept: Low atmospheric pressure and hence, low boiling point.
Atmospheric pressure is lower at higher altitudes, so water starts boiling at a lower temperature. At this reduced temperature, it takes much longer to cook rice or boil eggs.
A lump of coal burns moderately in air while coal dust burns explosively. Why?
The surface area of a lump of coal is much smaller than that of the same mass of coal dust. The far larger area of contact of the dust gives the air free access to it, so the dust burns explosively while the lump burns moderately.
Why does the temperature of a metal wire rise when an electric current is passed through it?
Concept: Heating effect of an electric current (Joule heating).
The electric current in a metal wire is due to the motion of electrons. During their motion, the electrons collide with the oscillating positive ions of the metal and impart part of their energy to them. As the ions oscillate faster, their increased energy appears as heat.
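The heat produced follows Joule's law, P = I^2 * R; a one-line check with illustrative figures:

    # Joule heating: power dissipated in a wire of resistance R carrying current I.
    I, R = 5.0, 10.0   # amperes and ohms (illustrative)
    print(I**2 * R)    # 250 W dissipated as heat in the wire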
Artificial satellites are usually launched from the earth in the eastward direction. Why?
Concept: A free initial velocity from the earth's rotation.
Launching in the direction of the earth's rotation (that is, eastward) gives the rocket a free head start equal to the local surface speed of rotation, about 0.46 km/sec at the equator, towards the roughly 8 km/sec needed to reach orbit. If the earth did not rotate, the rocket would have to supply the whole of that velocity itself. Satellites launched into a polar orbit lack this advantage, and Israel's satellites, which for safety are launched westward over the Mediterranean Sea, must actually overcome the eastward rotation and supply slightly more than 8 km/sec.
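The head start is simply the local surface speed of rotation, v = 2*pi*R*cos(latitude)/T; the sketch below uses the standard equatorial radius and the sidereal day:

    import math

    R = 6.378e6   # equatorial radius of the earth, m
    T = 86164.0   # sidereal day, s

    def surface_speed(latitude_deg):
        # Eastward speed of the ground due to the earth's rotation at a latitude.
        return 2 * math.pi * R * math.cos(math.radians(latitude_deg)) / T

    print(surface_speed(0.0))   # ~465 m/s of free boost at the equator
    print(surface_speed(45.0))  # ~329 m/s at mid-latitudes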