Units of Measurement
Measurement is the comparison of a quantity with a reference standard; the reference standard is the known quantity against which the unknown is measured. The base units are seven in number: the units of length, mass, time, electric current, temperature, luminous intensity and amount of substance. All other units, which can be derived from the base units, are called derived units. There are several systems of units which are utilised for measurements.
CGS System, also centimeter-gram-second system (usually written “cgs system”), a metric system based on the centimeter (cm) for length, the gram (g) for mass, and the second (s) for time. It is derived from the meter-kilogram-second (or MKS) system but uses certain special designations such as the dyne (for force) and the erg (for energy). It has generally been employed where small quantities are encountered, as in physics and chemistry.
The FPS system uses the foot, the pound and the second for length, mass and time measurements respectively.
International System of Units (French Le Système International d’Unités), name adopted by the Eleventh General Conference on Weights and Measures, held in Paris in 1960, for a universal, unified, self-consistent system of measurement units based on the MKS (meter-kilogram-second) system. The international system is commonly referred to throughout the world as SI, after the initials of Système International. The Metric Conversion Act of 1975 commits the United States to the increasing use of, and voluntary conversion to, the metric system of measurement, further defining metric system as the International System of Units as interpreted or modified for the United States by the secretary of commerce.
At the 1960 conference, standards were defined for six base units and for two supplementary units; a seventh base unit, the mole, was added in 1971.
The meter and the kilogram had their origin in the metric system. By international agreement, the standard meter had been defined as the distance between two fine lines on a bar of platinum-iridium alloy. The 1960 conference redefined the meter as 1,650,763.73 wavelengths of the reddish-orange light emitted by the isotope krypton-86. The meter was again redefined in 1983 as the length of the path traveled by light in vacuum during a time interval of 1/299,792,458 of a second.
When the metric system was created, the kilogram was defined as the mass of 1 cubic decimeter of pure water at the temperature of its maximum density (4.0° C/39.2° F). A solid cylinder of platinum was carefully made to match this quantity of water under the specified conditions. Later it was discovered that a quantity of water as pure or as stable as required could not be provided. Therefore the primary standard of mass became the platinum cylinder, which was replaced in 1889 by a platinum-iridium cylinder of similar mass. Today this cylinder still serves as the international kilogram, and the kilogram in SI is defined as the mass of the international prototype of the kilogram.
For centuries, time has been universally measured in terms of the rotation of the earth. The second, the basic unit of time, was defined as 1/86,400 of a mean solar day, one complete rotation of the earth on its axis. Scientists discovered, however, that the rotation of the earth was not constant enough to serve as the basis of the time standard. As a result, the second was redefined in 1967 in terms of the resonant frequency of the cesium atom—that is, the frequency at which this atom absorbs energy, or 9,192,631,770 hertz (cycles per second). A solar day is the period between the noons (the instants at which the sun is at its highest point during its transit across the sky) of consecutive days. A mean solar day is the average solar day over a year.
The temperature scale adopted by the 1960 conference was based on a fixed temperature point, the triple point of water, at which the solid, liquid, and gas are in equilibrium. The temperature of 273.16 K was assigned to this point. The freezing point of water was designated as 273.15 K, equaling exactly 0° on the Celsius temperature scale. The Celsius scale, which is identical to the centigrade scale, is named for the 18th-century Swedish astronomer Anders Celsius, who first proposed the use of a scale in which the interval between the freezing and boiling points of water is divided into 100 degrees. By international agreement, the term Celsius has officially replaced centigrade.
In SI, the ampere was defined as the constant current that, flowing in two parallel conductors one meter apart in a vacuum, will produce a force between the conductors of 2 × 10⁻⁷ newton per meter of length.
In 1971 the mole was defined as the amount of substance of a system that contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12.
The international unit of light intensity, the candela, was originally defined as 1/60 of the light radiated from a square centimeter of a blackbody, a perfect radiator that reflects no light, held at the temperature of freezing platinum. It is now more precisely defined as the intensity of a light source, in a given direction, with a frequency of 540 × 10¹² hertz and a radiant intensity of 1/683 watt per steradian in that direction.
The radian is the plane angle between two radii of a circle that cut off on the circumference an arc equal in length to the radius.
The steradian is defined as the solid angle that, having its vertex in the center of a sphere, cuts off an area of the surface of the sphere equal to that of a square with sides of length equal to the radius of the sphere.
The SI units for all other quantities are derived from the seven base units and the two supplementary units. Some derived units are used so often that they have been assigned special names—usually the names of scientists.
The National Physical Laboratory, New Delhi, has the responsibility of maintaining and improving the physical standards of length, time, mass, etc.
A list is given below of the derived units employed in the physical sciences.
Angstrom – wavelength
Coulomb – quantity of electricity (charge)
Watt – power
Watt-hour – energy consumed (quantity of electricity utilised)
Joule – energy or work done
Newton – force
Ohm – electrical resistance
Volt – potential difference, or electromotive force (e.m.f.)
Hertz – wave frequency, or oscillations per second
Roentgen – quantity of radiation
Mechanics
Mechanics is concerned with the study of the motion of objects. Motion is the change in the position of a body with respect to a reference point during a certain period.
Motion is of two types: linear (translational) and spin (rotational). The movement of a car on the road is a linear motion, while a wheel spinning on an axle is a spin motion.
The speed of a moving body is the distance it covers in unit time.
The velocity is the rate of change of displacement, i.e. the distance covered by an object in a specific direction in unit time.
Acceleration is the rate of change of velocity.
Angular velocity is the rate of change of angle of a rigid body rotating about an axis.
The law of the centre of gravity states that a body resting on a firm support will remain in equilibrium only if the vertical line through its centre of gravity passes through the base of the body. The stability of an object is thus connected with the position of its centre of gravity: if the vertical line through the centre of gravity passes through the base of the object, the object is stable; otherwise it is unstable.
Equilibrium is the state of a body upon which no resultant force and no resultant moment of forces act. The equilibrium may be stable, unstable or neutral.
A stable equilibrium is one in which a body returns to its original position after a small displacement.
An unstable equilibrium is one in which the body does not return to its original position after a small displacement, e.g. a cone resting on its vertex.
A body is in neutral equilibrium if, after a small displacement, it occupies a position similar to the original position, e.g. a cone resting on its side.
Force denotes a push or a pull. A force produces a change in a body’s state of rest or of uniform motion in a straight line.
There are different kinds of forces such as tension, friction, gravitation etc.
Gravitational force is the force which pulls everything towards the earth. The gravitational force exists between all bodies. It is gravitational force that holds the moon in its orbit round the earth and the earth in its orbit round the sun. Newton’s law of universal gravitation states that every particle in the universe attracts every other particle with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
Centripetal force is the force, directed towards the centre, that is necessary to produce the continuous change of direction in a circular motion.
When a stone tied at one end of a string is whirled round the hand, the tension in the string provides the centripetal force.
Centrifugal force is the apparent force, equal and opposite to the centripetal force, that seems to act outwards on a body revolving in a circle.
Weight is the force with which a body is attracted towards the centre of the earth. Weight is obtained by multiplying the mass of the body by the acceleration due to gravity (g). The mass of the body remains constant, whereas the weight varies from place to place on the earth and elsewhere in the universe. This variation in weight is due to the variation of g with altitude and with latitude (the earth is not a perfect sphere and it rotates).
Weightlessness is a condition experienced by bodies when the effective gravitational pull on them is zero. Consider a person standing on a weighing scale in a lift. When the lift is stationary, the scale shows his actual weight. When the lift accelerates upward, the scale shows a higher weight, and when it accelerates downward the scale shows less weight. If the cable of the lift breaks and it starts falling freely, the reading on the scale becomes zero and the person experiences weightlessness.
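The lift example can be put in numbers: the scale reads the normal reaction N = m(g + a), where a is the lift's upward acceleration. A minimal sketch (the 70 kg mass and the accelerations are illustrative assumptions):

```python
# Apparent weight on a lift's weighing scale: N = m * (g + a),
# where a is the lift's upward acceleration (negative when accelerating
# downward, and a = -g in free fall, giving zero reading: weightlessness).
g = 9.8          # acceleration due to gravity, m/s^2
m = 70.0         # mass of the person, kg (illustrative)

for label, a in [("stationary", 0.0),
                 ("accelerating up at 2 m/s^2", 2.0),
                 ("accelerating down at 2 m/s^2", -2.0),
                 ("free fall (cable broken)", -g)]:
    reading = m * (g + a)   # scale reading in newtons
    print(f"{label}: scale reads {reading:.0f} N")
```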
Friction is a force which opposes the relative motion of two surfaces in contact. It plays an important role in day-to-day experience: it is friction that makes walking possible and lets car tyres grip the road.
The motion of physical bodies obeys three basic laws, which were formulated by Sir Isaac Newton.
Flotation and Archimedes Principle
Buoyancy is the force acting upward on an immersed solid; it is the reaction of the displaced fluid to the immersion, and it acts at the centre of buoyancy. The weight of the immersed body acts downward, at the centre of gravity of the solid.
Flotation is the condition of a body in a fluid when the buoyancy acting on it is exactly equal to its weight in air. Flotation is an equilibrium condition.
Archimedes’ principle states that when a solid body is partially or fully immersed in a fluid, it experiences an upthrust equal to the weight of the fluid it displaces.
These two forces, the weight and the buoyancy, act on a body when it is immersed.
The relative magnitude of the two forces determines whether the body will float in the fluid or sink.
Centre of buoyancy (CB) is the centre of gravity of the volume of the fluid displaced by the immersed body.
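Archimedes' principle gives a simple float-or-sink test: a fully immersed body experiences an upthrust equal to (fluid density × volume × g), so it floats if its average density is less than the fluid's. A minimal sketch with illustrative densities:

```python
# Upthrust on a fully immersed body = fluid_density * volume * g (Archimedes).
# The body floats when this exceeds its weight, i.e. when its average
# density is less than the fluid's density.
def floats(body_density, fluid_density=1000.0):
    """Return True if a solid of the given density (kg/m^3) floats in the fluid."""
    return body_density < fluid_density

print(floats(920.0))    # ice in water  -> True
print(floats(7870.0))   # iron in water -> False
```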
Thermodynamics
Field of physics that describes and correlates the physical properties of macroscopic systems of matter and energy. The principles of thermodynamics are of fundamental importance to all branches of science and engineering.
A central concept of thermodynamics is that of the macroscopic system, defined as a geometrically isolable piece of matter in coexistence with an infinite, unperturbed environment. The state of a macroscopic system in equilibrium can be described in terms of such measurable properties as temperature, pressure, and volume, which are known as thermodynamic variables. Many other variables (such as density, specific heat, compressibility, and the coefficient of thermal expansion) can be identified and correlated, to produce a more complete description of an object and its relationship to its environment.
When a macroscopic system moves from one state of equilibrium to another, a thermodynamic process is said to take place. Some processes are reversible and others are irreversible. The laws of thermodynamics govern the nature of all thermodynamic processes and place limits on them.
The vocabulary of empirical sciences is often borrowed from daily language. Thus, although the term temperature appeals to common sense, its meaning suffers from the imprecision of nonmathematical language. A precise, though empirical, definition of temperature is provided by the so-called zeroth law of thermodynamics as explained below.
When two systems are in equilibrium, they share a certain property. This property can be measured and a definite numerical value ascribed to it. A consequence of this fact is the zeroth law of thermodynamics, which states that when each of two systems is in equilibrium with a third, the first two systems must be in equilibrium with each other. This shared property of equilibrium is the temperature.
The first law of thermodynamics gives a precise definition of heat, another commonly used concept.
When an object is brought into contact with a relatively colder object, a process takes place that brings about an equalization of the temperatures of the two objects. Heat was once thought to be a material fluid called caloric; the first law of thermodynamics instead identifies caloric, or heat, as a form of energy. It can be converted into mechanical work, and it can be stored, but it is not a material substance. Heat, measured originally in terms of a unit called the calorie, and work and energy, measured in ergs, were shown by experiment to be totally equivalent. One calorie is equivalent to 4.186 × 10⁷ ergs, or 4.186 joules.
The first law, then, is a law of energy conservation. It states that, because energy cannot be created or destroyed—setting aside the later ramifications of the equivalence of mass and energy—the amount of heat transferred into a system plus the amount of work done on the system must result in a corresponding increase of internal energy in the system. Heat and work are mechanisms by which systems exchange energy with one another.
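In symbols, with Q the heat transferred into the system and W the work done on the system, the first law reads
ΔU = Q + W,
where ΔU is the increase in the internal energy of the system.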
The second law of thermodynamics gives a precise definition of a property called entropy. Entropy can be thought of as a measure of how close a system is to equilibrium; it can also be thought of as a measure of the disorder in the system. The law states that the entropy—that is, the disorder—of an isolated system can never decrease. Thus, when an isolated system achieves a configuration of maximum entropy, it can no longer undergo change: It has reached equilibrium. Nature, then, seems to “prefer” disorder or chaos. It can be shown that the second law stipulates that, in the absence of work, heat cannot be transferred from a region at a lower temperature to one at a higher temperature.
All important thermodynamic relations used in engineering are derived from the first and second laws of thermodynamics. One useful way of discussing thermodynamic processes is in terms of cycles—processes that return a system to its original state after a number of stages, thus restoring the original values for all the relevant thermodynamic variables. In a complete cycle the internal energy of a system depends solely on these variables and cannot change. Thus, the total net heat transferred to the system must equal the total net work delivered from the system.
An ideal cycle would be performed by a perfectly efficient heat engine—that is, all the heat would be converted to mechanical work. The 19th-century French scientist Nicolas Léonard Sadi Carnot, who conceived a thermodynamic cycle that is the basic cycle of all heat engines, showed that such an ideal engine cannot exist. Any heat engine must expend some fraction of its heat input as exhaust. The second law of thermodynamics places an upper limit on the efficiency of engines; that upper limit is less than 100 percent. The limiting case is now known as a Carnot cycle.
The second law suggests the existence of an absolute temperature scale that includes an absolute zero of temperature. The third law of thermodynamics states that absolute zero cannot be attained by any procedure in a finite number of steps. Absolute zero can be approached arbitrarily closely, but it can never be reached.
Heat is a form of energy related to the temperature difference between bodies. The greater the internal energy of a substance, the hotter it will be.
Calorie: the quantity of heat necessary to raise the temperature of 1 gram of water through 1°C.
Calorimeter: a device used to measure the quantity of heat involved in a process.
The temperature of a body is the quantity that tells how hot or cold the body is with respect to some standard body. There are three scales in common use: Kelvin (absolute), Celsius (centigrade) and Fahrenheit. The relationships are given below.
The three main temperature scales, Fahrenheit, Celsius and Kelvin, are each named after the scientist who originated the scale. In certain contexts one scale might be more appropriate to use than another, and there are linear formulas that allow you to convert from one system to another.
To convert from Celsius to Fahrenheit, the equation is
F = (9/5)C + 32,
where F stands for the temperature measured in Fahrenheit and C stands for the temperature measured in Celsius.
To convert from Celsius to Kelvin, the equation is
K = 273+C
where K stands for the temperature measured in Kelvin and C stands for the temperature measured in Celsius.
To convert from Kelvin to Fahrenheit, we combine the two equations above, which gives
F = (9/5)(K − 273) + 32.
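These linear formulas translate directly into code. A minimal sketch (the function names are illustrative):

```python
# Temperature conversions using the linear formulas above.
def celsius_to_fahrenheit(c):
    return (9.0 / 5.0) * c + 32.0

def celsius_to_kelvin(c):
    return c + 273.0          # 273 as in the text; 273.15 is the exact offset

def kelvin_to_fahrenheit(k):
    # Combine the two formulas: first K -> C, then C -> F.
    return celsius_to_fahrenheit(k - 273.0)

print(celsius_to_fahrenheit(100.0))  # 212.0 (boiling point of water)
print(celsius_to_kelvin(0.0))        # 273.0 (freezing point of water)
print(kelvin_to_fahrenheit(373.0))   # 212.0
```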
There are different kinds of thermometers used for different purposes:
All substances, whether solids, liquids or gases, expand when heated and contract when cooled. Expansion may be linear, superficial (of surface area) or cubical (of volume).
There are three ways by which transmission of heat occurs: conduction, convection and radiation.
An important effect of heat is change of state.
There are three states or phases in which matter can exist: solid, liquid and gas. A fourth state, named plasma, exists at very high temperatures in the ionic form.
Heat engines are devices, which convert heat energy into mechanical energy repeatedly.
In a steam engine (used in turbines to generate electric power or to pull trains) or in an internal combustion engine (e.g. in cars, trucks, scooters, etc.), heat is converted to mechanical energy at a steady rate. Devices which do the reverse, namely use mechanical work to remove heat, are also called heat engines. They are more often called heat pumps, because they pump heat out of an appliance.
The refrigerator is such a device: it pumps heat out of its interior.
The Carnot theorem states that no engine can be more efficient than a Carnot engine operating between the same two temperature limits.
The efficiency of a real heat engine is always less than 100%.
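The Carnot limit itself depends only on the two temperature limits: the maximum efficiency is 1 − T_cold/T_hot, with both temperatures in kelvin. A minimal sketch (the 500 K and 300 K values are illustrative):

```python
# Carnot (maximum possible) efficiency between two temperature limits.
# Temperatures must be absolute (kelvin).
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

# Steam at 500 K exhausting to surroundings at 300 K (illustrative values):
print(f"{carnot_efficiency(500.0, 300.0):.0%}")  # 40% -- well below 100%
```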
Sound
Sound, physical phenomenon that stimulates the sense of hearing. In humans, hearing takes place whenever vibrations of frequencies between about 15 and 20,000 hertz reach the inner ear. The hertz, or Hz, is a unit of frequency equaling one cycle per second. Such vibrations reach the inner ear when they are transmitted through air, and the term sound is sometimes restricted to such airborne vibration waves. Modern physicists, however, usually extend the term to include similar vibrations in liquid or solid media. Sounds of frequencies higher than about 20,000 Hz are called ultrasonic.
A sound wave, unlike a transverse wave such as a light wave, is a longitudinal wave. As the energy of wave motion is propagated outward from the center of disturbance, the individual air molecules that carry the sound move back and forth, parallel to the direction of wave motion. Thus, a sound wave is a series of alternate compressions and rarefactions of the air. Each individual molecule passes the energy on to neighboring molecules, but after the sound wave has passed, each molecule remains in about the same location.
Any simple sound, such as a musical note, may be completely described by specifying three perceptual characteristics: pitch, loudness (or intensity), and quality (or timbre). These characteristics correspond exactly to three physical characteristics: frequency, amplitude, and harmonic constitution, or waveform, respectively. Noise is a complex sound, a mixture of many different frequencies or notes not harmonically related.
The frequency of a sound wave is a measure of the number of waves passing a given point in 1 second. The distance between two successive crests of the wave is called the wavelength. The product of the wavelength and the frequency must equal the speed of propagation of the wave, and is the same for sounds of all frequencies (if the sound is propagated through the same medium at the same temperature). Thus, the wavelength of A above middle C is about 78.2 cm (about 2.6 ft), and the wavelength of A below middle C is about 156.4 cm (about 5.1 ft).
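The relation wavelength × frequency = speed of propagation reproduces the figures quoted above. A minimal sketch, taking A above middle C as 440 Hz and the speed of sound in air at 20° C as 344 m/sec (both from this article):

```python
# wavelength = speed of sound / frequency
speed = 344.0                      # m/sec in air at 20 deg C (from the text)

for note, freq in [("A above middle C", 440.0), ("A below middle C", 220.0)]:
    wavelength_cm = speed / freq * 100.0
    print(f"{note}: {wavelength_cm:.1f} cm")   # about 78.2 cm and 156.4 cm
```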
On a fixed-pitch instrument, such as a piano, it is not possible to arrange the notes so that all of these ratios hold exactly, and some compromise is necessary in tuning, called the meantone system, or tempered scale.
The amplitude of a sound wave is the degree of motion of air molecules within the wave, which corresponds to the extent of rarefaction and compression that accompanies the wave. The greater the amplitude of the wave, the harder the molecules strike the eardrum and the louder the sound that is perceived. The amplitude of a sound wave can be expressed in terms of absolute units by measuring the actual distance of displacement of the air molecules.
The distance at which a sound can be heard depends on its intensity, which is the average rate of flow of energy per unit area perpendicular to the direction of propagation. In the case of spherical waves spreading from a point source, the intensity varies inversely as the square of the distance, provided that no loss of energy is due to viscosity, heat conduction, or other absorption effects. Thus, in a perfectly homogeneous medium, a sound will be nine times as intense at a distance of 1 unit from its origin as at a distance of 3 units. In the actual propagation of sound through the atmosphere, changes in the physical properties of the air, such as temperature, pressure, and humidity, produce damping and scattering of the directed sound waves, so that the inverse-square law generally is not applicable in direct measurements of the intensity of sound.
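The ideal inverse-square case is a one-line computation. A minimal sketch comparing the intensities at 1 unit and 3 units from a point source:

```python
# In a homogeneous, non-absorbing medium, intensity ~ 1 / distance^2.
def relative_intensity(distance, reference_distance=1.0):
    return (reference_distance / distance) ** 2

print(relative_intensity(3.0))   # 1/9: nine times weaker at 3 units than at 1
```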
If A above middle C is played on a violin, a piano, and a tuning fork, all at the same volume, the tones are identical in frequency and amplitude, but very different in quality. Of these three sources, the simplest tone is produced by the tuning fork, the sound in this case consisting almost entirely of vibrations having frequencies of 440 Hz. Because of the acoustical properties of the ear and the resonance properties of the ear’s vibrating membrane, however, it is doubtful whether a pure tone reaches the inner hearing mechanism in an unmodified form. The principal component of the note produced by the piano or violin also has a frequency of 440 Hz, but these notes also contain components with frequencies that are exact multiples of 440, called overtones, such as 880, 1320, and 1760. The exact intensity of these other components, which are called harmonics, determines the quality of the note.
The speed of propagation of sound in dry air at a temperature of 0° C (32° F) is 331.6 m/sec (1088 ft/sec). If the temperature is increased, the speed of sound increases; thus, at 20° C (68° F), the velocity of sound is 344 m/sec (1129 ft/sec). Changes in pressure at controlled density have virtually no effect on the speed of sound. The velocity of sound in many other gases depends only on their density. If the molecules are heavy, they move less readily, and sound progresses through such a medium more slowly. Thus, sound travels slightly faster in moist air than in dry air, because moist air contains a greater number of lighter molecules. The velocity of sound in most gases depends also on one other factor, the specific heat, which affects the propagation of sound waves.
Sound generally moves much faster in liquids and solids than in gases. In both liquids and solids, density has the same effect as in gases; that is, velocity varies inversely as the square root of the density. The velocity also varies directly as the square root of the elasticity. The speed of sound in water, for example, is slightly less than 1525 m/sec (5000 ft/sec) at ordinary temperatures but increases greatly with an increase in temperature. The speed of sound in copper is about 3353 m/sec (about 11,000 ft/sec) at ordinary temperatures and decreases as the temperature is increased (due to decreasing elasticity); in steel, which is more elastic, sound moves at a speed of about 4877 m/sec (about 16,000 ft/sec). Sound is propagated very efficiently in steel.
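The stated dependence amounts to v = √(elasticity/density). A minimal sketch with rough handbook values; the moduli below are illustrative assumptions (the appropriate elastic modulus differs between fluids and solids), so the results only approximate the figures quoted above:

```python
import math

# Speed of sound: v = sqrt(elastic modulus / density).
def sound_speed(elastic_modulus_pa, density_kg_m3):
    return math.sqrt(elastic_modulus_pa / density_kg_m3)

# Illustrative values: bulk modulus of water, Young's modulus of steel.
print(f"water: {sound_speed(2.2e9, 1000.0):.0f} m/sec")   # ~1483
print(f"steel: {sound_speed(2.0e11, 7850.0):.0f} m/sec")  # ~5048
```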
Sound moves forward in a straight line when traveling through a medium having uniform density. Like light, however, sound is subject to refraction, which bends sound waves from their original path. In Polar Regions, for example, where air close to the ground is colder than air that is somewhat higher, a rising sound wave entering the warmer region, in which sound moves with greater speed, is bent downward by refraction. The excellent reception of sound downwind and the poor reception upwind are also due to refraction. The velocity of wind is generally greater at an altitude of many meters than near the ground; a rising sound wave moving downwind is bent back toward the ground, whereas a similar sound wave moving upwind is bent upward over the head of the hearer.
Sound is also governed by reflection, obeying the fundamental law that the angle of incidence equals the angle of reflection. An echo is the result of reflection of sound. Sonar depends on the reflection of sounds propagated in water. A megaphone is a funnel-like tube that forms a beam of sound waves by reflecting some of the diverging rays from the sides of the tube. A similar tube can gather sound waves if the large end is pointed at the source of the sound; an ear trumpet is such a device.
Sound is also subject to diffraction and interference. If sound from a single source reaches a listener by two different paths—one direct and the other reflected—the two sounds may reinforce one another; but if they are out of phase they may interfere, so that the resultant sound is actually less intense than the direct sound without reflection. Interference paths are different for sounds of different frequencies, so that interference produces distortion in complex sounds. Two sounds of different frequencies may combine to produce a third sound, the frequency of which is equal to the sum or difference of the original two frequencies.
If the ear of an average young person is tested by an audiometer, it will be found to be sensitive to all sounds from 15 or 20 Hz to 15,000 or 20,000 Hz. The hearing of older persons is less acute, particularly to the higher frequencies. The degree to which a sensitive ear can distinguish between two pure notes of slightly different loudness or slightly different frequency varies in different ranges of loudness and frequency of the tones. A difference in loudness of about 20 percent (1 decibel, dB) and a difference in frequency of 1/3 percent (about 1/20 of a note) can be distinguished in sounds of moderate intensity at the frequencies to which the ear is most sensitive (about 1000 to 2000 Hz). In this same range, the difference between the softest sound that can be heard and the loudest sound that can be distinguished as sound (louder sounds are “felt,” or perceived, as painful stimuli) is about 120 dB, an intensity ratio of about 1 trillion.
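The 120 dB figure corresponds to an intensity ratio of 10¹² ("about 1 trillion"), since every 10 dB is a tenfold increase in intensity. A minimal sketch:

```python
import math

# Decibel level from an intensity ratio: dB = 10 * log10(I / I0).
def decibels(intensity_ratio):
    return 10.0 * math.log10(intensity_ratio)

print(decibels(1e12))   # 120.0 dB: loudest tolerable vs. softest audible
print(decibels(1.26))   # ~1 dB: roughly the just-noticeable loudness step
```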
All of these sensitivity tests refer to pure tones, such as those produced by an electronic oscillator. Even for such pure tones the ear is imperfect. Notes of identical frequency but differing greatly in intensity may seem to differ slightly in pitch. More important is the difference in apparent relative intensities with different frequencies. At high intensities the ear is approximately equally sensitive to most frequencies, but at low intensities the ear is much more sensitive to the middle high frequencies than to the lowest and highest. Thus, sound-reproducing equipment that is functioning perfectly will seem to fail to reproduce the lowest and highest notes if the volume is decreased.
In speech, music, and noise, pure tones are seldom heard. A musical note contains, in addition to a fundamental frequency, higher tones that are harmonics of the fundamental frequency. Speech contains a complex mixture of sounds, some (but not all) of which are in harmonic relation to one another. Noise consists of a mixture of many different frequencies within a certain range; it is thus comparable to white light, which consists of a mixture of light of all different colors. Different noises are distinguished by different distributions of energy in the various frequency ranges.
When a musical tone containing some harmonics of a fundamental tone, but missing other harmonics or the fundamental itself, is transmitted to the ear, the ear forms various beats in the form of sum and difference frequencies, thus producing the missing harmonics or the fundamental not present in the original sound. These notes are also harmonics of the original fundamental note. This incorrect response of the ear may be valuable. Sound-reproducing equipment without a large speaker, for example, cannot generally produce sounds of pitch lower than two octaves below middle C; nonetheless, a human ear listening to such equipment can resupply the fundamental note by resolving beat frequencies from its harmonics. Another imperfection of the ear in the presence of ordinary sounds is the inability to hear high-frequency notes when low-frequency sound of considerable intensity is present. This phenomenon is called masking.
In general, speech is understandable and musical themes can be satisfactorily understood if only the frequencies between 250 and 3000 Hz, the frequency range of ordinary telephones, are reproduced. For naturalness, however, the range of about 100 to 10,000 Hz must be reproduced. Sounds produced by a few musical instruments can be reproduced naturally only at somewhat lower frequencies, and a few noises can be reproduced at somewhat higher frequencies.
Beats occur when two sources of nearly, but not quite, equal frequencies are made to vibrate simultaneously. The resulting sound shows a periodic variation in intensity; this periodic waxing and waning of the sound is known as beats.
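The beat frequency is simply the difference of the two frequencies. A minimal sketch with illustrative values:

```python
# Two tones of nearly equal frequency produce beats at their difference.
def beat_frequency(f1, f2):
    return abs(f1 - f2)

print(beat_frequency(440.0, 444.0))   # 4.0 beats per second
```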
A musical sound is produced by a series of similar impulses following each other regularly without sudden change in loudness. Musical sounds differ from each other in three respects: pitch, loudness and quality (timbre).
A musical note is made up of the fundamental tone and its overtones or harmonics. Each musical instrument produces different overtones, and this speciality helps us to distinguish one instrument from another.
Pitch is a physiological attribute that distinguishes a grave (low) note from a sharp (high) note; it is determined by the frequency of the fundamental note.
Loudness corresponds roughly to the intensity of the sound.
The quality of a musical sound is determined by its harmonic content: the relative amplitudes of the harmonic components and their phase relations determine the quality of the musical sound.
A musical scale consists of notes whose fundamental frequencies have specified ratios.
Conventionally there are eight notes, which fix an octave.
Octave is the musical interval between two tones when the frequency of one is just twice that of the other.
The frequency ratio of the eighth and the first note is 2:1.
The note of the lowest frequency is called the keynote.
Musical instruments such as piano and harmonium have many octaves.
Melody is produced when two notes that produce concord are sounded one after another, producing a pleasing effect.
Harmony is produced when they are sounded simultaneously.
Stringed instruments: in these, stretched strings are made to vibrate by (a) striking with a hammer (piano), (b) plucking with the finger (sitar), (c) bowing with a bow (violin).
In wind instruments, an air column of varying length is made to vibrate by blowing a jet of air at one end, as in the flute. In percussion instruments, like the tabla, a circular stretched membrane (loaded with a paste of iron filings and gum at its centre) is made to vibrate up and down over a hollow chamber of air by striking it with the palm and fingers. The Mach number is the ratio of the speed of an object to the speed of sound in the surrounding medium; an object moving faster than sound has a Mach number greater than 1 and is called supersonic.
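The Mach number is a plain ratio. A minimal sketch, reusing the 344 m/sec speed of sound quoted earlier (the object speeds are illustrative):

```python
# Mach number = object's speed / speed of sound in the surrounding medium.
def mach_number(object_speed, sound_speed=344.0):
    return object_speed / sound_speed

print(mach_number(688.0))   # 2.0: supersonic (Mach > 1)
print(mach_number(172.0))   # 0.5: subsonic
```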
Acoustics (Greek akouein, “to hear”) is a term sometimes used for the science of sound in general. It is more commonly used for the special branch of that science, architectural acoustics, which deals with the construction of enclosed areas so as to enhance the hearing of speech or music.
Acoustical design must take into consideration that in addition to physiological peculiarities of the ear, hearing is complicated by psychological peculiarities. For example, sounds that are unfamiliar seem unnatural. Sound produced in an ordinary room is somewhat modified by reverberations due to reflections from walls and furniture; for this reason, a broadcasting studio should have a normal degree of reverberation to ensure natural reproduction of sound. For best acoustic qualities, rooms are designed to produce sufficient reflections for naturalness, without introducing excessive reverberation at any frequency, without echoing certain frequencies unnaturally, and without producing undesirable interference effects or distortion.
For modifying the reverberations, the architect has two types of materials, sound-absorbent and sound-reflecting, to coat the surfaces of ceilings, walls, and floors. Soft materials such as cork and felt absorb most of the sound that strikes them, although they may reflect some of the low-frequency sounds. Hard materials such as stone and metals reflect most of the sound that strikes them.
Another important aspect of room acoustics is insulation from unwanted sound. This is obtained by carefully sealing even the smallest openings that can leak sound, by using massive walls, and by building several unconnected walls separated by dead spaces.
Magnetism
Poles are the regions in the body of a magnet where the power of attracting iron filings is found to be maximum. The two poles are called the north pole and the south pole.
Law of magnetic forces states that like poles repel and unlike poles attract each other.
The substances, which are susceptible to magnetic influence, are called magnetic substances, e.g. iron, steel, nickel etc.
Magnetic substances are of three types: diamagnetic, paramagnetic and ferromagnetic.
Antimony, bismuth, zinc, gold and silver are diamagnetic substances.
A bar magnet when freely suspended comes to rest in the north-south direction, i.e., in the direction of earth’s magnetic field.
When a magnet is broken into two parts, each part becomes a complete magnet, i.e., each part possesses a north and a south pole. It is impossible to obtain a magnet with only one pole.
A magnetic shell is a thin sheet of magnetic material of any shape which is so magnetized that one face shows north polarity and the other face shows south polarity.
Declination at the place is the angle between the geographical meridian and the magnetic meridian at that place.
Isogonic lines are lines joining places of equal declination; the line of zero declination is called the agonic line.
Isoclinic lines are lines joining places of equal dip or inclination:
Aclinic lines are lines joining places of zero dip.
Isodynamic lines are lines joining places having an equal horizontal component of the earth’s magnetic field. The horizontal component vanishes at the two magnetic poles, where the dip needle rests vertical.
The strength of the earth’s magnetic field at the surface is of the order of one gauss (the corresponding unit of magnetizing field is the oersted).
The magnetic axis of the earth does not coincide with its axis of rotation, and is offset by an angle of about 20°. Throughout geological time this angle has changed, i.e., the earth’s magnetic axis has been shifting. At present, the tip of the magnetic axis corresponding to the earth’s magnetic south pole is located at a point in northern Canada: 96° W, 70.5° N.
The magnetic equator, a great circle on the earth perpendicular to the magnetic axis, passes through India, in fact near Trivandrum. The magnetic field of the earth is due to a molten, charged metallic fluid giving rise to a current flowing inside the core of the earth. The earth’s magnetic field is like that of a giant bar magnet, or dipole. This dipole pattern holds only up to about five earth radii, i.e. about 32,000 km; beyond that it gets severely disturbed by the solar wind, a steady stream of hot charged particles put out by the sun.
Electricity
Electricity is divisible into two types: static electricity and current electricity.
Electricity which does not travel or move is termed static electricity. When a glass rod is rubbed with a silk cloth, the rod acquires a positive charge and the silk an equal negative charge.
Like charges repel and unlike charges attract.
A list of objects which acquire charges is given below.
Positive charge: glass rod, woollen cloth
Negative charge: silk cloth, amber, ebonite, rubber, plastic
When a metal is charged with static electricity, it is found that charge resides entirely on the outside surface of a conductor. The earth acts merely as a very large storehouse of positive or negative charges.
If a car is struck by lightning, persons sitting inside are shielded from the electricity; they may get burnt but will not be electrocuted.
If a charged body is to keep its charge, it must be supported by something that does not allow electrons to pass freely along it. Suitable materials, such as glass, hard rubber, sulphur, porcelain and mica, are insulators.
A conductor is a substance, such as a metal, which allows charge carriers to pass easily.
Semi-conductors are substances having electrical conductivity less than good conductors but greater than that of the insulators. Examples are silicon, germanium, carbon and selenium.
Capacitance is the ability of a conductor to store more charge as its potential is raised. A capacitor consists of two flat metal plates separated by a thin sheet of insulating material. Capacitors are utilised in radio, TV and telephone circuits and in many other electronic devices.
Lightning conductors are used to protect tall buildings from lightning damage. Lightning is a gigantic electric discharge occurring between two charged clouds or between a charged cloud and the earth. One end of a lightning conductor is placed at the highest point of the building and the lower end is grounded. When charged clouds pass over the lightning conductor, it accepts any discharge which may occur and passes it to the earth.
The flow of electric charge is electric current; the amount of charge passing per unit time is called the current. The electrical effect travels through a circuit at nearly the speed of light, although the electrons themselves drift far more slowly. To maintain a continuous flow of current in a circuit, an electromotive force is provided by a source, which may be a cell or a generator. In alternating current (AC), the electrons flow alternately backward and forward.
Electric power is generally conveyed as AC, as it can be transformed to a very high voltage and transmitted over long distances with minimum power loss.
AC electricity can be transferred all over the country by high-voltage overhead power lines (the grid). Electricity is generated in a typical power station at 11,000 volts and then stepped up to 132 kV by transformers.
It is fed into the grid at this voltage and subsequently stepped down in successive stages, to 33,000 volts and then to 6,600 volts at substations in the neighbourhood of towns. Factories and the like get their power at 6,600 volts. For domestic consumers, the voltage is further stepped down to 220 V.
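High-voltage transmission wastes less power because, for a given power delivered, raising the voltage lowers the current (P = VI), and the heating loss in the line is I²R. A minimal sketch (the 1 MW load and 5-ohm line resistance are illustrative assumptions):

```python
# Line loss for the same delivered power at two transmission voltages.
def line_loss(power_w, voltage_v, line_resistance_ohm):
    current = power_w / voltage_v                 # from P = V * I
    return current ** 2 * line_resistance_ohm    # I^2 * R heating loss

P, R = 1e6, 5.0    # 1 MW carried over a line of 5 ohms (illustrative)
print(f"{line_loss(P, 11_000, R):,.0f} W lost at 11 kV")    # ~41,322 W
print(f"{line_loss(P, 132_000, R):,.0f} W lost at 132 kV")  # ~287 W
```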
We receive electricity in our homes through the mains. One wire is the live wire and the other is the neutral wire; the potential difference between the two is 220 V. The various domestic appliances in a house are connected across these wires in parallel, so each appliance gets the full voltage, and switching one ‘on’ or ‘off’ does not affect the others.
Electronics
Field of engineering and applied physics dealing with the design and application of devices, usually electronic circuits, the operation of which depends on the flow of electrons for the generation, transmission, reception, and storage of information. The information can consist of voice or music (audio signals) in a radio receiver, a picture on a television screen, or numbers and other data in a computer.
The introduction of vacuum tubes at the beginning of the 20th century was the starting point of the rapid growth of modern electronics. With vacuum tubes the manipulation of signals became possible, which could not be done with the early telegraph and telephone circuit or with the early transmitters using high-voltage sparks to create radio waves. For example, with vacuum tubes weak radio and audio signals could be amplified, and audio signals, such as music or voice, could be superimposed on radio waves. The development of a large variety of tubes designed for specialized functions made possible the swift progress of radio communication technology before World War II and the development of early computers during and shortly after the war.
The transistor, invented in 1948, has now almost completely replaced the vacuum tube in most of its applications. Incorporating an arrangement of semiconductor materials and electrical contacts, the transistor provides the same functions as the vacuum tube but at reduced cost, weight, and power consumption and with higher reliability. Subsequent advances in semiconductor technology, in part attributable to the intensity of research associated with the space-exploration effort, led to the development of the integrated circuit. Integrated circuits may contain hundreds of thousands of transistors on a small piece of material and allow the construction of complex electronic circuits, such as those in microcomputers, audio and video equipment, and communications satellites.
Electronic circuits consist of interconnections of electronic components. Components are classified into two categories—active or passive. Passive elements never supply more energy than they absorb; active elements can supply more energy than they absorb. Passive components include resistors, capacitors, and inductors. Components considered active include batteries, generators, vacuum tubes, and transistors.
A vacuum tube consists of an air-evacuated glass envelope that contains several metal electrodes. A simple, two-element tube (diode) consists of a cathode and an anode that is connected to the positive terminal of a power supply. The cathode—a small metal tube heated by a filament—frees electrons, which migrate to the anode—a metal cylinder around the cathode (also called the plate). If an alternating voltage is applied to the anode, electrons will only flow to the anode during the positive half-cycle; during the negative cycle of the alternating voltage, the anode repels the electrons, and no current passes through the tube. Diodes connected in such a way that only the positive half-cycles of an alternating current (AC) are permitted to pass are called rectifier tubes; these are used in the conversion of alternating current to direct current (DC) .
Transistors are made from semiconductors. These are materials, such as silicon or germanium, that are “doped” (have minute amounts of foreign elements added) so that either an abundance or a lack of free electrons exists. In the former case, the semiconductor is called n-type, and in the latter case, p-type. By combining n-type and p-type materials, a diode can be produced. When this diode is connected to a battery so that the p-type material is positive and the n-type negative, electrons are repelled from the negative battery terminal and pass unimpeded to the p-region, which lacks electrons. With the battery reversed, the electrons arriving in the p-material can pass only with difficulty to the n-material, which is already filled with free electrons, and the current is almost zero.
Most integrated circuits are small pieces, or “chips,” of silicon, perhaps 2 to 4 mm (0.08 to 0.15 in) on a side, in which transistors are fabricated. Photolithography enables the designer to create tens of thousands of transistors on a single chip by proper placement of the many n-type and p-type regions. These are interconnected with very small conducting paths during fabrication to produce complex special-purpose circuits. Such integrated circuits are called monolithic because they are fabricated on a single crystal of silicon. Chips require much less space and power and are cheaper to manufacture than an equivalent circuit built by employing individual transistors.
If a battery is connected across a conducting material, a certain amount of current will flow through the material. This current is dependent on the voltage of the battery, on the dimensions of the sample, and on the conductivity of the material itself. Resistors with known resistance are used for current control in electronic circuits. The resistors are made from carbon mixtures, metal films, or resistance wire and have two connecting wires attached. Variable resistors, with an adjustable sliding contact arm, are often used to control volume on radios and television sets.
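The current drawn through a known resistance follows Ohm's law, I = V/R. A minimal sketch with illustrative values:

```python
# Ohm's law: current (amperes) = voltage (volts) / resistance (ohms).
def current(voltage, resistance):
    return voltage / resistance

print(current(12.0, 100.0))   # 0.12 A through a 100-ohm resistor on 12 V
```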
Capacitors consist of two metal plates that are separated by an insulating material. If a battery is connected to both plates, an electric charge will flow for a short time and accumulate on each plate. If the battery is disconnected, the capacitor retains the charge and the voltage associated with it. Rapidly changing voltages, such as those caused by an audio or radio signal, produce larger current flows to and from the plates; the capacitor then functions as a conductor for the changing current. This effect can be used, for example, to separate an audio or radio signal from a direct current in order to connect the output of one amplifier stage to the input of the next amplifier stage.
Inductors consist of a conducting wire wound into the form of a coil. When a current passes through the coil, a magnetic field is set up around it that tends to oppose rapid changes in current intensity. Like a capacitor, an inductor can be used to distinguish between rapidly and slowly changing signals. When an inductor is used in conjunction with a capacitor, the voltage in the inductor reaches a maximal value for a specific frequency. This principle is used in a radio receiver, where a specific frequency is selected by a variable capacitor.
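That specific frequency is the resonant frequency of the inductor-capacitor pair, f = 1/(2π√(LC)); tuning a radio adjusts C to move it. A minimal sketch (the component values are illustrative assumptions):

```python
import math

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C)).
def resonant_frequency(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A 200 microhenry coil with a 320 picofarad capacitor (illustrative):
f = resonant_frequency(200e-6, 320e-12)
print(f"{f/1e3:.0f} kHz")   # ~629 kHz, within the AM broadcast band
```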
Measurements of mechanical, thermal, electrical, and chemical quantities are made by devices called sensors and transducers. The sensor is responsive to changes in the quantity to be measured, for example, temperature, position, or chemical concentration. The transducer converts such measurements into electrical signals, which, usually amplified, can be fed to instruments for the readout, recording, or control of the measured quantities. Sensors and transducers can operate at locations remote from the observer and in environments unsuitable or impractical for humans.
Some devices act as both sensor and transducer. A thermocouple has two junctions of wires of different metals; these generate a small electric voltage that depends on the temperature difference between the two junctions. A thermistor is a special resistor, the resistance of which varies with temperature. A variable resistor can convert mechanical movement into an electrical signal. Specially designed capacitors are used to measure distance, and photocells are used to detect light. Other devices are used to measure velocity, acceleration, or fluid flow. In most instances, the electric signal is weak and must be amplified by an electronic circuit.
Electronic amplifiers are used mainly to increase the voltage, current, or power of a signal.
Audio amplifiers, such as are found in radios, television sets, citizens band (CB) radios, and cassette recorders, are generally operated at frequencies below 20 kilohertz (1 kHz = 1000 cycles/sec). They amplify the electrical signal, which then is converted to sound in a loudspeaker. Operational amplifiers (op-amps), built with integrated circuits and consisting of DC-coupled, multistage, linear amplifiers are popular for audio amplifiers.
Video amplifiers are used mainly for signals with a frequency spectrum range up to 6 megahertz (1 MHz = 1 million cycles/sec). The signal handled by the amplifier becomes the visual information presented on the television screen, with the signal amplitude regulating the brightness of the spot forming the image on the screen. To achieve its function, a video amplifier must operate over a wide band and amplify all frequencies equally and with low distortion.
Radio-frequency amplifiers boost the signal level of radio or television communication systems. Their frequencies generally range from 100 kHz to 1 GHz (1 gigahertz = 1 billion cycles/sec) and can extend well into the microwave frequency range.
Computers
The physical computer and its components are known as hardware. Computer hardware includes the memory that stores data and instructions; the central processing unit (CPU) that carries out instructions; the bus that connects the various computer components; the input devices, such as a keyboard or mouse, that allow the user to communicate with the computer; and the output devices, such as printers and video display monitors, that enable the computer to present information to the user. The programs that run the computer are called software. Software generally is designed to perform a particular type of task—for example, to control the arm of a robot to weld a car’s body, to draw a graph, or to direct the general operation of the computer.
When a computer is turned on it searches for instructions in its memory. Usually, the first set of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of the software and hardware actions. When the user requests that a program run, the operating system loads the program in the computer’s memory and runs the program. Popular operating systems, such as Windows 95 and the Macintosh operating system, have a graphical user interface (GUI)—that is, a display that uses tiny pictures, or icons, to represent various commands. To execute these commands, the user clicks the mouse on the icon or presses a combination of keys on the keyboard.
Information from an input device or memory is communicated via the bus to the CPU, which is the part of the computer that translates commands and runs programs. The CPU is a microprocessor chip—that is, a single piece of silicon containing millions of electrical components. Information is stored in a CPU memory location called a register. Registers can be thought of as the CPU’s tiny scratchpad, temporarily storing instructions or data. When a program is run, one register called the program counter keeps track of which program instruction comes next. The CPU’s control unit coordinates and times the CPU’s functions, and it retrieves the next instruction from memory.
In a typical sequence, the CPU locates the next instruction in the appropriate memory device. The instruction then travels along the bus from the computer’s memory to the CPU, where it is stored in a special instruction register. Meanwhile, the program counter is incremented to prepare for the next instruction. The current instruction is analyzed by a decoder, which determines what the instruction will do. Any data the instruction needs are retrieved via the bus and placed in the CPU’s registers. The CPU executes the instruction, and the results are stored in another register or copied to specific memory locations.
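The fetch-decode-execute sequence can be illustrated with a toy machine; the three opcodes below are invented for illustration and are not any real CPU's machine language:

```python
# A toy fetch-decode-execute loop. Instructions are (opcode, operand) pairs;
# the opcodes LOAD/ADD/PRINT are invented for this illustration.
program = [("LOAD", 5), ("ADD", 3), ("PRINT", None)]

program_counter = 0        # keeps track of which instruction comes next
accumulator = 0            # a single CPU register

while program_counter < len(program):
    opcode, operand = program[program_counter]   # fetch
    program_counter += 1                         # increment for the next one
    if opcode == "LOAD":                         # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "PRINT":
        print(accumulator)                       # outputs 8
```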
The bus is usually a flat cable with numerous parallel wires. The bus enables the components in a computer, such as the CPU and memory, to communicate. Typically, several bits at a time are sent along the bus. For example, a 16-bit bus, with 16 parallel wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one device to another.
Input devices, such as a keyboard or mouse, permit the computer user to communicate with the computer. Other input devices include a joystick, a rodlike device often used by game players; a scanner, which converts images such as photographs into binary information that the computer can manipulate; a light pen, which can draw on, or select objects from, a computer’s video display by pressing the pen against the display’s surface; a touch panel, which senses the placement of a user’s finger; and a microphone, used to gather sound information.
Once the CPU has executed the program instruction, the program may request that information be communicated to an output device, such as a video display monitor or a flat liquid crystal display. Other output devices are printers, overhead projectors, videocassette recorders (VCRs), and speakers.
Programming languages contain the series of commands that create software. In general, a language that is encoded in binary numbers or a language similar to binary numbers that a computer’s hardware understands is understood more quickly by the computer. A program written in this type of language also runs faster. Languages that use words or other commands that reflect how humans think are easier for programmers to use, but they are slower because the language must be translated first so the computer can understand it.
Computer programs that can be run by a computer’s operating system are called executables. An executable program is a sequence of extremely simple instructions known as machine code. These instructions are specific to the individual computer’s CPU and associated hardware; for example, Intel Pentium and Power PC microprocessor chips each have different machine languages and require different sets of codes to perform the same task. Machine code instructions are few in number (roughly 20 to 200, depending on the computer and the CPU). Typical instructions are for copying data from a memory location or for adding the contents of two memory locations (usually registers in the CPU). Machine code instructions are binary—that is, sequences of bits (0s and 1s). Because these numbers are not understood easily by humans, computer instructions usually are not written in machine code.
Assembly language uses commands that are easier for programmers to understand than are machine-language commands. Each machine language instruction has an equivalent command in assembly language. Once an assembly-language program is written, it is converted to a machine-language program by another program called an assembler. Assembly language is fast and powerful because of its correspondence with machine language. It is still difficult to use, however, because assembly-language instructions are a series of abstract codes. In addition, different CPUs use different machine languages and therefore require different assembly languages. Assembly language is sometimes inserted into a higher-level language program to carry out specific hardware tasks or to speed up a higher-level program.
Higher-level languages are easier to use than machine and assembly languages because their commands resemble natural human language. In addition, these languages are not CPU-specific. Instead, they contain general commands that work on different CPUs.
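For example, in a higher-level language (Python is used here as the illustration), displaying a greeting takes a single general command:

```python
print("Hello, World!")
```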
This command directs the computer’s CPU to display the greeting, and it will work no matter what type of CPU the computer uses. Like assembly-language instructions, higher-level-language commands must be translated before the CPU can execute them; here the translator is a program called a compiler, which turns a higher-level program into CPU-specific machine language.
American naval officer and mathematician Grace Murray Hopper helped develop the first commercially available higher-level software language, FLOW-MATIC, in 1957.
Object-oriented programming (OOP) languages like C++ are based on traditional higher-level languages, but they enable a programmer to think in terms of collections of cooperating objects instead of lists of commands. Objects, such as a circle, have properties such as the radius of the circle and the command that draws it on the computer screen. Classes of objects can inherit features from other classes of objects. For example, a class defining squares can inherit features such as right angles from a class defining rectangles. This set of programming classes simplifies the programmer’s task, resulting in more reliable and efficient programs.
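As a minimal sketch of this idea, assuming hypothetical Rectangle and Square classes, the inheritance described above might look as follows in Python:

```python
# Minimal sketch of class inheritance (hypothetical classes for illustration).

class Rectangle:
    """A rectangle defined by its width and height; all angles are right angles."""

    def __init__(self, width: float, height: float):
        self.width = width
        self.height = height

    def area(self) -> float:
        return self.width * self.height


class Square(Rectangle):
    """A square inherits the rectangle's features (right angles, area formula)."""

    def __init__(self, side: float):
        # A square is simply a rectangle whose width equals its height.
        super().__init__(side, side)


sq = Square(3.0)
print(sq.area())  # 9.0 -- uses the area() method inherited from Rectangle
```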
Computers exist in a wide range of sizes and power. The smallest are embedded within the circuitry of appliances, such as televisions and wrist watches. These computers are typically preprogrammed for a specific task, such as tuning to a particular television frequency or keeping accurate time.
Programmable computers vary enormously in their computational power, speed, memory, and physical size. The smallest of these computers can be held in one hand and are called personal digital assistants (PDAs). They are used as notepads, scheduling systems, and address books; if equipped with a cellular phone, they can connect to worldwide computer networks to exchange information regardless of location.
Laptop computers and PCs are typically used in businesses and at home to communicate on computer networks, for word processing, to track finances, and to play games. They have large amounts of internal memory to store hundreds of programs and documents. They are equipped with a keyboard; a mouse, trackball, or other pointing device; and a video display monitor or liquid crystal display (LCD) to display information. Laptop computers usually have hardware and software similar to those of PCs, but they are more compact and have flat, lightweight LCDs instead of video display monitors.
Workstations are similar to personal computers but have greater memory and more extensive mathematical abilities, and they are connected to other workstations or personal computers to exchange data. They are typically found in scientific, industrial, and business environments that require high levels of computational abilities.
Mainframe computers have more memory, speed, and capabilities than workstations and are usually shared by multiple users through a series of interconnected computers. They control businesses and industrial facilities and are used for scientific research. The most powerful mainframe computers, called supercomputers, process complex and time-consuming calculations, such as those used to create weather predictions. They are used by the largest businesses, scientific institutions, and the military. Some supercomputers have many sets of CPUs. These computers break a task into small pieces, and each CPU processes a portion of the task to increase overall speed and efficiency. Such computers are called parallel processors.
Computers can communicate with other computers through a series of connections and associated hardware called a network. The advantage of a network is that data can be exchanged rapidly, and software and hardware resources, such as hard-disk space or printers, can be shared.
One type of network, a local area network (LAN), consists of several PCs or workstations connected to a special computer called the server. The server stores and manages programs and data. A server often contains all of a networked group’s data and enables LAN workstations to be set up without storage capabilities to reduce cost.
Mainframe computers and supercomputers commonly are networked. They may be connected to PCs, workstations, or terminals that have no computational abilities of their own. These “dumb” terminals are used only to enter data into, or receive output from, the central computer.
Wide area networks (WANs) are networks that span large geographical areas. Computers can connect to these networks to use facilities in another city or country. For example, a person in Los Angeles can browse through the computerized archives of the Library of Congress in Washington, D.C. The largest WAN is the Internet, a global consortium of networks linked by common communication programs. The Internet is a mammoth resource of data, programs, and utilities. Its underlying technology was developed largely by American computer scientist Vinton Cerf, beginning in 1973, as part of a project of the United States Department of Defense Advanced Research Projects Agency (DARPA). In 1984 the development of Internet technology was turned over to private, government, and scientific agencies. The World Wide Web is a system of information resources accessed primarily through the Internet. Users can obtain a variety of information in the form of text, graphics, sounds, or animations. These data are extensively cross-indexed, enabling users to browse (transfer from one information site to another) via buttons, highlighted text, or sophisticated searching software known as search engines.
Light
Light, form of energy visible to the human eye that is radiated by moving charged particles. Light from the sun provides the energy needed for plant growth, and plants convert the energy in sunlight into a storable chemical form through a process called photosynthesis.
Scientists have learned through experimentation that light behaves like a particle at times and like a wave at other times. The particlelike features of light are called photons; the idea that light comes in discrete packets of energy originated with Max Planck in 1900. Photons differ from particles of matter in that they have no mass and always move at the constant speed of about 300,000 km/sec (186,000 mi/sec) in a vacuum. When light diffracts, or bends slightly as it passes around a corner, it shows wavelike behavior. The waves associated with light are called electromagnetic waves because they consist of changing electric and magnetic fields.
Light can be emitted, or radiated, by electrons circling the nucleus of their atom. Electrons can circle atoms only in certain patterns called orbitals, and electrons have a specific amount of energy in each orbital. The amount of energy needed for each orbital is called an energy level of the atom. Electrons that circle close to the nucleus have less energy than electrons in orbitals farther from the nucleus. If the electron is in the lowest energy level, then no radiation occurs despite the motion of the electron. If an electron in a lower energy level gains some energy, it must jump to a higher level, and the atom is said to be excited. The motion of the excited electron causes it to lose energy, and it falls back to a lower level. The energy the electron releases is equal to the difference between the higher and lower energy levels. The electron may emit this quantum of energy in the form of a photon.
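As a rough numerical illustration of this process, the sketch below computes the wavelength of the emitted photon from an assumed energy gap of 1.89 electron volts (the value of hydrogen’s red Balmer line, chosen here only as an example), using the relation E = hc/λ:

```python
# Wavelength of the photon emitted when an electron drops between two levels.
# Example energy gap: ~1.89 eV (the red Balmer line of hydrogen).
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron volt

energy_gap = 1.89 * eV               # energy difference between the two levels, J
wavelength = h * c / energy_gap      # E = hc/lambda  ->  lambda = hc/E
print(f"{wavelength * 1e9:.0f} nm")  # ~656 nm, in the red part of the spectrum
```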
Each atom has a unique set of energy levels, and the energies of the corresponding photons it can emit make up what is called the atom’s spectrum. This spectrum is like a fingerprint by which the atom can be identified. The process of identifying a substance from its spectrum is called spectroscopy. The laws that describe the orbitals and energy levels of atoms are the laws of quantum theory, developed in the 1920s specifically to account for the radiation of light and the sizes of atoms.
The waves that accompany light are made up of oscillating, or vibrating, electric and magnetic fields, which are force fields that surround charged particles and influence other charged particles in their vicinity. These electric and magnetic fields change strength and direction at right angles, or perpendicularly, to each other in a plane (vertically and horizontally, for instance). The electromagnetic wave formed by these fields travels in a direction perpendicular to both fields (out of the plane in which they oscillate).
Light does not need a medium, or substance, through which to travel. Light from the sun and distant stars reaches the earth by traveling through the vacuum of space. The waves associated with natural sources of light are irregular, like the water waves in a busy harbor. Scientists think of such waves as being made up of many smooth waves, in which the motion is regular and the wave stretches out indefinitely with regularly spaced peaks and valleys. Such regular waves are called monochromatic because they correspond to a single color of light.
The wavelength of a monochromatic wave is the distance between two consecutive wave peaks. Wavelengths of visible light can be measured in meters or in nanometers (nm), which are one billionth of a meter (or about 0.4 ten-millionths of an inch). Frequency corresponds to the number of wavelengths that pass by a certain point in space in a given amount of time. This value is usually measured in cycles per second, or Hertz (Hz). All electromagnetic waves travel at the same speed, so in one second, more short waves will pass by a point in space than will long waves. This means that shorter waves have a higher frequency than longer waves. The relationship between wavelength, speed, and frequency is expressed by the equation: wave speed equals wavelength times frequency, or
c = λf
where c is the speed of the light wave in m/sec (3 × 10^8 m/sec in a vacuum), λ is the wavelength in meters, and f is the wave’s frequency in Hz.
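For example, the frequency of light with a 500-nanometer wavelength (a green color, chosen as an illustration) follows directly from this relation:

```python
# Frequency of a light wave from c = lambda * f.
c = 2.998e8                   # speed of light in vacuum, m/s
wavelength = 500e-9           # 500 nm, expressed in meters
frequency = c / wavelength    # f = c / lambda
print(f"{frequency:.2e} Hz")  # ~6.0e14 Hz
```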
The amplitude of an electromagnetic wave is the height of the wave, measured from a point midway between a peak and a trough to the peak of the wave. This height corresponds to the maximum strength of the electric and magnetic fields and to the number of photons in the light.
Polarization refers to the direction of the electric field in an electromagnetic wave. A wave whose electric field is oscillating in the vertical direction is said to be polarized in the vertical direction. The photons of such a wave interact with matter differently than the photons of a wave polarized in the horizontal direction. The electric field in light waves from the sun vibrates in all directions, so direct sunlight is called unpolarized. Sunlight reflected from a surface is partially polarized parallel to the surface. Polaroid sunglasses block light that is horizontally polarized and therefore reduce glare from sunlight reflecting off horizontal surfaces.
Sources of light differ in how they provide energy to the charged particles, such as electrons, whose motion creates the light. If the energy comes from heat, then the source is called incandescent. If the energy comes from another source, such as chemical or electrical energy, the source is called luminescent.
In an incandescent light source, hot atoms collide with each other. These collisions transfer energy to some electrons, boosting them into higher energy levels. As the electrons release this energy, they emit photons. Some collisions are weak and some are strong, so the electrons are excited to different energy levels and photons of different energies are emitted. Candlelight is incandescent and results from the excited atoms of soot in the hot flame. Light from an incandescent light bulb comes from excited atoms in a thin wire called a filament that is heated by passing an electric current through it.
The sun is an incandescent light source, and its heat comes from nuclear reactions deep below its surface. The color of incandescent sources is related to their temperature, with hotter sources having more blue in their spectra, or ranges of photon energies, and cooler sources more red.
A luminescent light source absorbs energy in some form other than heat, and is therefore usually cooler than an incandescent source. The color of a luminescent source is not related to its temperature. A fluorescent light is a type of luminescent source that makes use of chemical compounds called phosphors. Fluorescent light tubes are filled with mercury vapor and coated on the inside with phosphors. As electricity passes through the tube, it excites the mercury atoms and makes them emit blue, green, violet, and ultraviolet light. The electrons in the phosphor atoms absorb the ultraviolet radiation, lose some of that energy as heat, and then emit visible light of a lower frequency.
Phosphor compounds are also used to convert electron energy to light in a television picture tube. Beams of electrons in the tube collide with phosphor atoms in small dots on the screen, exciting the phosphor electrons to higher energy levels. As the electrons drop back to their original energy level, they emit some heat and visible light. The light from all the phosphor dots combines to form the picture.
In certain phosphor compounds, atoms remain excited for a long time before radiating light. A light source is called phosphorescent if the delay between energy absorption and emission is longer than one second. Phosphorescent materials can glow in the dark for several minutes after they have been exposed to strong light.
The aurora borealis and aurora australis (northern and southern lights) in the night sky in high latitudes are luminescent sources. Electrons in the solar wind that sweeps out from the sun become deflected in the earth’s magnetic field and dip into the upper atmosphere near the north and south magnetic poles. The electrons then collide with atmospheric molecules, exciting the molecules’ electrons and making them emit light in the sky.
Chemiluminescence occurs when a chemical reaction produces molecules with electrons in excited energy levels that can then radiate light. The color of the light depends on the chemical reaction. When chemiluminescence occurs in plants or animals it is called bioluminescence.
The first successful theory of light wave motion in three dimensions was proposed by the Dutch scientist Christiaan Huygens in 1678. Huygens suggested that light wave peaks form surfaces like the layers of an onion. In a vacuum, or a uniform material, the surfaces are spherical. These wave surfaces advance, or spread out, through space at the speed of light.
Interference in waves occurs when two waves overlap. If a peak of one wave is aligned with the peak of the second wave, then the two waves will produce a larger wave with a peak that is the sum of the two overlapping peaks. This is called constructive interference. If a peak of one wave is aligned with a trough of the other, then the waves will tend to cancel each other out and they will produce a smaller wave or no wave at all. This is called destructive interference.
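A small numerical sketch of this superposition idea, adding two equal waves once in phase and once half a cycle out of phase:

```python
import math

# Superpose two equal-amplitude waves with a phase offset between them.
def combined_amplitude(phase_offset: float, samples: int = 1000) -> float:
    """Peak amplitude of wave1 + wave2, sampled over one full cycle."""
    peak = 0.0
    for i in range(samples):
        x = 2 * math.pi * i / samples
        total = math.sin(x) + math.sin(x + phase_offset)
        peak = max(peak, abs(total))
    return peak

print(combined_amplitude(0.0))      # ~2.0: peaks aligned, constructive interference
print(combined_amplitude(math.pi))  # ~0.0: peak meets trough, destructive interference
```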
Instruments called interferometers use various arrangements of reflectors to produce two beams of light, which are allowed to interfere. These instruments can be used to measure tiny differences in distance or in the speed of light in one of the beams by observing the interference pattern produced by the two beams.
Holography is another application of interference. A hologram is made by splitting a light wave in two with a partially reflecting mirror. One part of the light wave travels through the mirror and is sent directly to a photographic plate. The other part of the wave is reflected first toward a subject, a face for example, and then toward the plate. The resulting photograph is a hologram.
Diffraction is the spreading of light waves as they pass through a small opening or around a boundary. As a beam of light emerges from a slit in an illuminated screen, the light some distance away from the screen will consist of overlapping wavelets from different points of the light wave in the opening of the slit. When the light strikes a spot on a display screen across from the slit, these points are at different distances from the spot, so their wavelets can interfere and lead to a pattern of light and dark regions. The pattern produced by light from a single slit will not be as pronounced as a pattern from two slits. This is because there is an infinite number of interfering waves, one from each point emerging from the slit, and their interference patterns overlap each other.
Monochromatic light, or light of one color, has several characteristics that can be measured. As discussed in the section on electromagnetic waves, the length of light waves is measured in meters, and the frequency of light waves is measured in cycles per second, or Hertz. The wavelength can be measured with interferometers, and the frequency determined from the wavelength and a measurement of the velocity of light in meters per second. Monochromatic light also has a well-defined polarization that can be measured using devices called polarimeters. Sometimes the direction of scattered light is also an important quantity to measure.
When light is considered as a source of illumination for human eyes, its intensity, or brightness, is measured in units that are based on a modernized version of the perceived brightness of a candle. These units include the rate of energy flow in light, which, for monochromatic light traveling in a single direction, is determined by the rate of flow of photons. The rate of energy flow in this case can be stated in watts, or Joules per second. Usually light contains many colors and radiates in many directions away from a source such as a lamp.
Scientists use the units candela and lumen to measure the brightness of light as perceived by humans. These units account for the different response of the eye to light of different colors. The lumen measures the total amount of energy in the light radiated in all directions, and the candela measures the amount radiated in a particular direction.
The lumen can be defined in terms of a source that radiates one candela uniformly in all directions. If a sphere with a radius of one foot were centered on the light source, then each square foot of the inside surface of the sphere would receive a flux of one lumen. Flux means the rate at which light energy falls on the surface. The illumination, or illuminance, of that one square foot is defined to be one foot-candle.
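A quick numerical check of these definitions: an isotropic one-candela source radiates 4π lumens in total, and the flux through each square foot of a one-foot sphere works out to exactly one lumen, that is, one foot-candle:

```python
import math

# An isotropic source of 1 candela radiates 4*pi lumens in total.
intensity_cd = 1.0
total_flux_lm = 4 * math.pi * intensity_cd   # ~12.57 lumens

# A sphere of radius 1 foot has a surface area of 4*pi square feet.
sphere_area_ft2 = 4 * math.pi * 1.0**2

# Flux per square foot = illuminance in foot-candles.
illuminance_fc = total_flux_lm / sphere_area_ft2
print(illuminance_fc)  # 1.0 foot-candle, matching the definition above
```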
Scientists have defined the speed of light to be exactly 299,792,458 meters per second (about 186,000 miles per second). This definition is possible because, since 1983, scientists have been able to measure the distance light travels in one second more accurately than the old standard meter itself. Therefore, in 1983, scientists defined the meter as 1/299,792,458 of the distance light travels in one second. This precise measurement is the latest step in a long history of measurement, beginning in the early 1600s with an unsuccessful attempt by the Italian scientist Galileo to measure the speed of lantern light from one hilltop to another.
Facts about light
Light sometimes behaves like a stream of very small particles of energy known as photons.
The speed of light is nearly 300,000 km per second.
Light travels in straight lines.
Light rays are reflected in a regular and orderly way by a mirror.
Metals make the best mirror surfaces: the electrons in the metal are able to vibrate with the incoming light waves and reradiate the energy they absorb.
The first successful measurements of the speed of light were astronomical. In 1676 the Danish astronomer Olaus Roemer noticed a delay in the eclipse of a moon of Jupiter when it was viewed from the far side as compared with the near side of earth’s orbit. Assuming the delay was the travel time of light across the earth’s orbit, and knowing roughly the orbital size from other observations, he divided distance by time to estimate the speed.
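A back-of-the-envelope version of Roemer’s reasoning, using rounded modern figures (the delay and orbit diameter below are illustrative, not his actual data):

```python
# Roemer-style estimate: speed = extra distance / extra delay.
orbit_diameter_m = 3.0e11   # width of earth's orbit (~2 AU), rounded
observed_delay_s = 1000.0   # illustrative delay, of order a quarter hour

speed_estimate = orbit_diameter_m / observed_delay_s
print(f"{speed_estimate:.1e} m/s")  # ~3e8 m/s, the right order of magnitude
```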
The paradox of the constancy of the speed of light created a major problem for physical theory that German-born American physicist Albert Einstein finally resolved in 1905. Einstein suggested that physical theories should not depend on the state of motion of the observer. Instead, Einstein said the speed of light had to remain constant, and all the rest of physics had to be changed to be consistent with this fact. This special theory of relativity predicted many unexpected physical consequences, all of which have since been observed in nature.
Household mirrors are made of plain glass coated on the back with an amalgam of tin and mercury; the coated side acts as the mirror surface. Precision mirrors instead use a thin layer of aluminium; such mirrors reflect up to 96 per cent of the light that falls on them.
A plane mirror forms an image of the same size as the object, and at the same distance behind the mirror as the object is in front of it. Two plane mirrors inclined at an angle θ form (360°/θ) − 1 images; as the mirrors approach parallel, the number of images becomes infinite.
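A one-line check of the inclined-mirror rule just stated (the simple formula applies when 360/θ is an integer; other angles need minor adjustments):

```python
# Number of images formed by two plane mirrors inclined at angle theta (degrees),
# for angles where 360/theta is an integer.
def image_count(theta_deg: float) -> int:
    return int(360 / theta_deg) - 1

print(image_count(90))  # 3 images
print(image_count(60))  # 5 images
# As theta -> 0 (parallel mirrors), 360/theta -> infinity: infinitely many images.
```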
Plane mirrors have the following applications:
Pepper’s Ghost is a method of producing the illusion of a ghost on a stage.
Spherical mirrors are made by coating a spherical surface with silver or another reflective material. These mirrors are of two types: convex and concave.
A concave mirror gives an erect, magnified image when the object lies within its focal length; it is used as a doctor’s mirror and in solar cookers, where it focuses the sun’s rays on the object to be heated.
A convex mirror forms small images and so gives a wide field of view. Convex mirrors are therefore used in automobiles to show approaching vehicles, and also in street lamps.
Hemispherical mirrors reflect rays that intersect one another to form a surface called a caustic.
A parabolic mirror is a concave mirror whose cross-section has the shape of a parabola. A parabolic mirror reflects light from a lamp at its focus as a perfectly parallel beam, and is used as the reflector in searchlights, motor-vehicle headlights, and reflecting astronomical telescopes.
For a thin double convex lens, all parallel rays are focused to a point referred to as the principal focal point. The distance from the lens to that point is the principal focal length f of the lens. For a double concave lens, which diverges the rays, the principal focal length is the distance at which the back-projected rays would come together, and it is given a negative sign. The lens strength, or power, in diopters is defined as the inverse of the focal length in meters. For a thick lens made from spherical surfaces, the focal distance differs for different rays; this variation is called spherical aberration. The focal length also differs slightly for different wavelengths; this is called chromatic aberration.
The principal focal length of a lens is determined by the index of refraction of the glass, the radii of curvature of its surfaces, and the medium in which the lens resides. For a thin lens in air it can be calculated from the lensmaker’s formula, 1/f = (n − 1)(1/R1 − 1/R2), where n is the index of refraction and R1 and R2 are the radii of curvature of the two surfaces.
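A minimal sketch of the lensmaker’s formula with illustrative values for an equiconvex crown-glass lens:

```python
# Lensmaker's formula for a thin lens in air: 1/f = (n - 1) * (1/R1 - 1/R2).
n = 1.5      # refractive index of the glass (typical crown glass)
R1 = 0.10    # radius of curvature of the first surface, m (convex: positive)
R2 = -0.10   # radius of curvature of the second surface, m (convex: negative)

focal_length = 1 / ((n - 1) * (1 / R1 - 1 / R2))
power_diopters = 1 / focal_length    # lens strength in diopters
print(focal_length, power_diopters)  # 0.1 m focal length -> 10 diopters
```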
A camera is a light-tight box with a lens at one end that forms an image of an object on light-sensitive film at the other end. In fixed-focus cameras the focus is preset, so they take sharp pictures at distances of roughly 3 to 9 feet; the lens aperture and shutter speed are also preset to give correct exposure outdoors in sunshine and indoors with flash. 35-mm cameras are compact and take a range of lenses, attachments, and accessories. They use perforated film 35 mm wide, in cassettes holding 20 or 36 exposures; the normal image size is 24 × 36 mm.
A Polaroid camera produces a finished print directly. Jelly-like developing chemicals are stored in pods attached to the film, and each picture is processed as it leaves the camera. The length of the exposure is determined by the intensity of the available light, the film speed, and the aperture of the lens. In cine cameras the shutter opens automatically as each frame of film comes to rest behind the lens; the film passes through the camera so that a set number of frames (commonly 16, 18, or 24) are exposed per second.
A virtual image is formed at the position where the paths of the principal rays cross when projected backward from their paths beyond the lens. Although a virtual image does not form a visible projection on a screen, it is in no sense "imaginary": it has a definite position and size and can be "seen" or imaged by the eye, a camera, or another optical instrument.
A reduced virtual image is formed by a single negative lens regardless of the object’s position. An enlarged virtual image can be formed by a positive lens by placing the object inside the principal focal point.
A magnifying glass is a converging lens that enlarges an object placed inside its principal focal point. The eyepiece of a telescope or microscope is a high-quality simple magnifier.
The compound microscope utilizes an objective and eyepiece and gives high magnifications.
A telescope provides angular magnification of a distant object. It produces an effect of enlargement and closeness of the object to the eye.
The astronomical telescope makes use of two positive lenses: the objective, which forms the image of a distant object at its focal length, and the eyepiece, which acts as a simple magnifier with which to view the image formed by the objective. Its length is equal to the sum of the focal lengths of the objective and eyepiece, and its angular magnification is −fo/fe, giving an inverted image.
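A quick numerical check of these telescope relations, with illustrative focal lengths:

```python
# Astronomical telescope: tube length and angular magnification.
f_objective = 1.0   # focal length of the objective, m (illustrative)
f_eyepiece = 0.025  # focal length of the eyepiece, m (illustrative)

tube_length = f_objective + f_eyepiece     # sum of the focal lengths
magnification = -f_objective / f_eyepiece  # negative sign: inverted image
print(tube_length, magnification)          # 1.025 m, -40x
```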
A compound microscope uses a very short focal length objective lens to form a greatly enlarged image. This image is then viewed with a short focal length eyepiece used as a simple magnifier. The image should be formed at infinity to minimize eyestrain.
The general assumption is that the length of the tube L is large compared with either fo or fe, so that the overall magnification is approximately M ≈ (L/fo) × (25 cm/fe), where 25 cm is taken as the near point of the normal eye.
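A numeric sketch of that approximation (the tube length and focal lengths below are illustrative):

```python
# Compound microscope magnification: M ~ (L / f_obj) * (25 cm / f_eye),
# valid when the tube length L is much larger than either focal length.
L_tube = 16.0      # tube length, cm (illustrative)
f_obj = 0.4        # objective focal length, cm
f_eye = 2.5        # eyepiece focal length, cm
near_point = 25.0  # near point of the normal eye, cm

magnification = (L_tube / f_obj) * (near_point / f_eye)
print(magnification)  # 400x
```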
Humans in Space
Space is a hostile environment for humans in a number of ways. It contains neither air nor oxygen, so human beings are unable to breathe. The vacuum of space can destroy an unprotected human body in a few seconds by explosive decompression. Temperatures in space in the shadow of a planet approach absolute zero; on the other hand, temperatures can become fatally high under direct solar radiation. Energetic solar and cosmic radiations in space may also be fatal to an unshielded person who is not protected by the atmosphere of the earth. These environmental conditions also affect the instruments and devices used in spacecraft, so their design and construction are dictated by the space environment. The effects of prolonged weightlessness have also been studied intensively to discover what adverse consequences this condition has for humans in space.
Humans can be protected against the space environment in several ways. At present, they are enclosed inside a hermetically sealed cabin or space suit, with a supply of pressurized air or oxygen to approximate conditions on earth. Air conditioning controls the temperature and humidity inside the cabin or space suit. Absorbing and reflecting surfaces on the outside of the spacecraft regulate the amount of heat radiation affecting the craft. Furthermore, space journeys are carefully planned to avoid the intense radiation belts around the earth. On long interplanetary voyages of the future, heavy shielding might be necessary to protect against solar radiation storms; or crews might be sheltered in a central position within the spacecraft with supplies and equipment to surround and shield them. For lengthy space journeys, or for prolonged stays in an earth-orbiting satellite, the effects of weightlessness might be reduced by spinning the craft so that the centrifugal effect provides artificial gravity. For this purpose, the spacecraft might be shaped like a large wheel that spins slowly around its own axis, or it might be built like a dumbbell, both ends of which rotate around the center of gravity of the dumbbell.
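As a rough sketch of the spinning-wheel idea, the required spin rate follows from equating the centripetal acceleration ω²r at the rim to earth gravity g (the 50-meter radius below is purely illustrative):

```python
import math

# Spin rate for 1 g of centrifugal "gravity" at the rim: g = omega^2 * r.
g = 9.81       # desired apparent gravity, m/s^2
radius = 50.0  # wheel radius, m (illustrative)

omega = math.sqrt(g / radius)     # angular speed, rad/s
rpm = omega * 60 / (2 * math.pi)  # revolutions per minute
print(f"{rpm:.1f} rpm")           # ~4.2 rpm for a 50 m wheel
```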
In the early 17th century the German astronomer Johannes Kepler wrote Somnium (The Dream), which might be called a scientific satire of a journey to the moon.
Polish astronomer Nicolaus Copernicus systematically explained that the planets, including the earth, revolve about the sun. Later in the 16th century the observations of the Danish astronomer Tycho Brahe greatly influenced the laws of planetary motion set forth by German astronomer Johannes Kepler. Italian scientist Galileo and British scientists Edmund Halley, Sir William Herschel, and Sir James Jeans were other astronomers who made contributions pertinent to astronautics.
Spacecraft that do not have to carry humans can be of a great variety of sizes, from a few centimeters to several meters in diameter, and of many shapes, depending on the purposes for which they are designed. Spacecraft that do not carry a crew have radio-transmitting equipment, both to relay information back to earth and to signal the position of the spacecraft.
Manned spacecraft must fulfill more exacting requirements than unmanned vehicles because of the needs of the human occupants. A manned space vehicle is designed to provide air for the astronauts, food and water, navigation and guidance equipment, seating and sleeping accommodations, and communication equipment so the astronauts can send and receive information from the control center on earth. A distinctive feature of manned spacecraft is the heat shield that protects the vehicle as it reenters the atmosphere.
The orbit of a spacecraft around the earth can be in the shape of a circle or an ellipse. A satellite in a circular orbit travels at a constant speed; the higher the altitude, however, the lower the speed relative to the surface of the earth. A satellite maintaining an altitude of 35,800 km (22,300 mi) over the equator is geostationary: it moves in a geosynchronous orbit, at exactly the same angular speed as the rotating earth, and so remains in a fixed position over a particular spot on the equator. Most communications satellites are placed in such orbits.
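As a quick check that the quoted altitude indeed gives a one-day period, the circular-orbit relation T = 2π√(r³/GM) can be evaluated numerically:

```python
import math

# Period of a circular orbit: T = 2*pi*sqrt(r^3 / (G*M)).
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the earth, kg
R_earth = 6.371e6   # mean radius of the earth, m
altitude = 3.58e7   # 35,800 km, the altitude quoted above, m

r = R_earth + altitude
period_hours = 2 * math.pi * math.sqrt(r**3 / (G * M_earth)) / 3600
print(f"{period_hours:.1f} h")  # ~23.9 h: the satellite keeps pace with earth's rotation
```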
In an elliptical orbit, the speed varies and is greatest at perigee (minimum altitude) and least at apogee (maximum altitude). Elliptical orbits can lie in any plane that passes through the earth’s center. The angle between the orbital plane and the equatorial plane is called the inclination of the orbit.
The earth rotates once every 24 hours under a satellite in a polar orbit. A polar-orbit weather satellite, carrying television and infrared cameras, can thus observe meteorological conditions over the entire globe from pole to pole in a single day. An orbit at another inclination covers a smaller portion of the earth, omitting areas around the poles.
As long as the orbit of an object keeps it in the vacuum of space, the object will continue to orbit without propulsive power because no frictional force slows it down. If part or all of the orbit passes through the atmosphere of the earth, however, the body is slowed by aerodynamic friction with the air. This causes the orbit to decay gradually to lower and lower altitudes until the object has fully reentered the atmosphere and burns up, like a meteor.
Earliest Space Programs
The first artificial earth satellite, Sputnik 1, was launched by the USSR on October 4, 1957. Sputnik Zemli, meaning “traveling companion of the world,” is the Russian name for an artificial satellite. The second artificial earth satellite was also a Soviet vehicle, Sputnik 2, sent aloft on November 3, 1957. The United States successfully launched its first earth satellite, Explorer 1, from Cape Canaveral (known as Cape Kennedy from 1963 to 1973), Florida, on January 31, 1958. On March 17, 1958, the United States launched its second satellite, Vanguard 1; a precise study of variations in its orbit showed that the earth is slightly pear-shaped. Using solar power, the satellite transmitted signals for more than six years. It was followed by the American satellite Explorer 3, launched on March 26, 1958, and by the Soviet satellite Sputnik 3, launched on May 15, 1958.
This class of unmanned spacecraft also performs useful functions for the earthbound scientist. The three general classifications of such satellites are communications, environmental, and navigation satellites.
Communications satellites provide communication over long distances by reflecting or relaying radio signals between places on earth or between satellites. There are currently hundreds of communications satellites in orbit around the earth, providing transmission of television signals, telephone conversations, and digital data.
Environmental satellites observe the earth and atmosphere, and transmit images for a variety of purposes. Weather satellites provide daily transmissions of temperatures and cloud patterns.
Earth observation satellites are used to obtain photographs of military value, such as detection of nuclear explosions in the atmosphere and in space, ballistic-missile launch sites, and ship and troop movements.
Navigation satellites provide a known observation point orbiting the earth that, when observed by ships and submarines, can fix the vessel’s position within a few yards.
The Navstar family of satellites, operated by the United States military, forms the basis of the Global Positioning System (GPS), which is gaining ever wider commercial use. Other countries are also developing their own satellite navigation systems, such as Galileo in Europe.
Outer space multilateral treaties
(1) the 1967 Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies;
(2) the 1968 Agreement on the Rescue of Astronauts, the Return of Astronauts and the Return of Objects Launched into Outer Space;
(3) the 1972 Convention on International Liability for Damage Caused by Space Objects;
(4) the 1975 Convention on Registration of Objects Launched into Outer Space; and
(5) the 1979 Agreement Governing the Activities of States on the Moon and Other Celestial Bodies. The last one proclaims the Moon and other celestial bodies within the solar system, other than the Earth, together with their natural resources, the common heritage of mankind. All five treaties are in force among their respective contracting parties, but the most important of these are doubtless the treaties on principles and on liability. The latter lays down detailed rules governing the recovery of damages for losses caused by space objects.