2015-04-30

Instance of using SI - Human body temperature



It is known to all that humans are endotherms, which means that the human body can keep its temperature at a steady level. But what is the normal body temperature of a human? Sometimes we have a fever, but what is the definition of one? In this post we will introduce these topics. (PS: The main unit in this post is °C, with K given in parentheses, so you can see how to use SI temperature units correctly.)

Normal human body temperature, also known as normothermia or euthermia, depends upon the place in the body at which the measurement is made, the time of day, as well as the activity level of the person. Nevertheless, commonly mentioned typical values are:

Oral (under the tongue): 36.8 ± 0.4 °C (309.95 ± 0.4 K)
Internal (rectal, vaginal): 37.0 °C (310.15 K)

Different parts of the body have different temperatures. Rectal and vaginal measurements taken directly inside the body cavity are typically slightly higher than oral measurements, and oral measurements are somewhat higher than skin measurements. Other places, such as under the arm or in the ear, produce different typical temperatures. Although some people think of these averages as representing the normal or ideal temperature, a wide range of temperatures has been found in healthy people.

The body temperature of a healthy person varies during the day by about 0.5 °C (0.5 K) with lower temperatures in the morning and higher temperatures in the late afternoon and evening, as the body's needs and activities change. Other circumstances also affect the body's temperature. The core body temperature of an individual tends to have the lowest value in the second half of the sleep cycle; the lowest point, called the nadir, is one of the primary markers for circadian rhythms. The body temperature also changes when a person is hungry, sleepy, or cold.

Methods of measurement

Taking a person's temperature is an initial part of a full clinical examination. There are various types of medical thermometers, as well as sites used for measurement, including:

  • In the anus (rectal temperature)
  • In the mouth (oral temperature)
  • Under the arm (axillary temperature)
  • In the ear (tympanic temperature)
  • In the vagina (vaginal temperature)
  • In the bladder
  • On the skin of the forehead over the temporal artery
The range of a medical thermometer is typically 35~42 °C (308.15~315.15 K), with an accuracy of 0.1 °C (0.1 K).


Variations

Temperature control (thermoregulation) is part of a homeostatic mechanism that keeps the organism at optimum operating temperature, as it affects the rate of chemical reactions. In humans, the average internal temperature is 37.0 °C (310.15 K), though it varies among individuals. However, no person always has exactly the same temperature at every moment of the day. Temperatures cycle regularly up and down through the day, as controlled by the person's circadian rhythm. The lowest temperature occurs about two hours before the person normally wakes up. Additionally, temperatures change according to activities and external factors.

In addition to varying throughout the day, normal body temperature may also differ as much as 0.5 °C (0.5 K) from one day to the next, so that the highest or lowest temperatures on one day will not always exactly match the highest or lowest temperatures on the next day.

Normal human body temperature varies slightly from person to person and by the time of day. Consequently, each type of measurement has a range of normal temperatures. The range for normal human body temperatures, taken orally, is 36.8±0.5 °C (309.95±0.5 K). This means that any oral temperature between 36.3 and 37.3 °C (309.45 and 310.45 K) is likely to be normal. The normal human body temperature is often stated as 36.5~37.5 °C (309.65~310.65 K).

Natural rhythms

Body temperature normally fluctuates over the day, with the lowest levels around 04:00 and the highest in the late afternoon, between 16:00 and 18:00 (assuming the person sleeps at night and stays awake during the day). Therefore, an oral temperature of 37.3 °C (310.45 K) would, strictly speaking, be a normal, healthy temperature in the afternoon but not in the early morning. An individual's body temperature typically changes by about 0.5 °C (0.5 K) between its highest and lowest points each day.

Body temperature is sensitive to many hormones, so women have a temperature rhythm that varies with the menstrual cycle, called a circamensal rhythm. A woman's basal body temperature rises sharply after ovulation, as estrogen production decreases and progesterone increases. Fertility awareness programs use this predictable change to identify when a woman can become pregnant. During the luteal phase of the menstrual cycle, both the lowest and the average temperatures are slightly higher than during other parts of the cycle. However, the amount that the temperature rises during each day is slightly lower than typical, so the highest temperature of the day is not very much higher than usual. Hormonal contraceptives both suppress the circamensal rhythm and raise the typical body temperature by about 0.6 °C (0.6 K).

Temperature also varies with the change of seasons during each year. This pattern is called a circannual rhythm. Studies of seasonal variations have produced inconsistent results. People living in different climates may have different seasonal patterns.

Increased physical fitness increases the amount of daily variation in temperature.

With increased age, both average body temperature and the amount of daily variability in the body temperature tend to decrease. Elderly patients may have a decreased ability to generate body heat during a fever, so even a somewhat elevated temperature can indicate a serious underlying cause in geriatrics.

Measurement methods

Different methods used for measuring temperature produce different results. The temperature reading depends on which part of the body is being measured. The typical daytime temperatures among healthy adults are as follows:
  • Temperature in the anus (rectum/rectal), vagina, or in the ear (otic) is about 37.5 °C (310.65 K)
  • Temperature in the mouth (oral) is about 36.8 °C (309.95 K)
  • Temperature under the arm (axillary) is about 36.5 °C (309.65 K)

Generally, oral, rectal, gut, and core body temperatures, although slightly different, are well-correlated, with oral temperature being the lowest of the four. Oral temperatures are generally about 0.4 °C (0.4 K) lower than rectal temperatures.

Oral temperatures are influenced by drinking, chewing, smoking, and breathing with the mouth open. Cold drinks or food reduce oral temperatures; hot drinks, hot food, chewing, and smoking raise oral temperatures.

Axillary (armpit), tympanic (ear), and other skin-based temperatures correlate relatively poorly with core body temperature. Tympanic measurements run higher than rectal and core body measurements, and axillary temperatures run lower. The body uses the skin as a tool to increase or decrease core body temperature, which affects the temperature of the skin. Skin-based temperatures are more variable than other measurement sites. The peak daily temperature for axillary measurements lags about three hours behind the rest of the body. Skin temperatures are also more influenced by outside factors, such as clothing and air temperature.

Specific temperature concepts

There are some specific temperature concepts when the body temperature is abnormal: fever, hyperpyrexia, hyperthermia, and hypothermia.

Fever:

A temperature setpoint is the level at which the body attempts to maintain its temperature. When the setpoint is raised, the result is a fever. Most fevers are caused by infectious disease and can be lowered, if desired, with antipyretic medications.

An early morning temperature higher than 37.2 °C (> 310.35 K) or a late afternoon temperature higher than 37.7 °C (>310.85 K) is normally considered a fever, assuming that the temperature is elevated due to a change in the hypothalamus's setpoint. Lower thresholds are sometimes appropriate for elderly people. The normal daily temperature variation is typically 0.5 °C (0.5 K), but can be greater among people recovering from a fever.

An organism at optimum temperature is considered afebrile or apyrexic, meaning "without fever". If temperature is raised, but the setpoint is not raised, then the result is hyperthermia.

Hyperpyrexia:

Hyperpyrexia is a fever with an extreme elevation of body temperature greater than or equal to 41.5 °C (314.65 K). Such a high temperature is considered a medical emergency as it may indicate a serious underlying condition or lead to significant side effects. The most common cause is an intracranial hemorrhage. Other possible causes include sepsis, Kawasaki syndrome, neuroleptic malignant syndrome, drug effects, serotonin syndrome, and thyroid storm. Infections are the most common cause of fevers; however, as the temperature rises, other causes become more common. Infections commonly associated with hyperpyrexia include roseola, rubeola and enteroviral infections. Immediate aggressive cooling to less than 38.9 °C (312.05 K) has been found to improve survival. Hyperpyrexia differs from hyperthermia in that in hyperpyrexia the body's temperature regulation mechanism sets the body temperature above the normal temperature, then generates heat to achieve this temperature, while in hyperthermia the body temperature rises above its set point due to an outside source.

Hyperthermia:

Hyperthermia is an example of a high temperature that is not a fever. It occurs from a number of causes including heatstroke, neuroleptic malignant syndrome, malignant hyperthermia, stimulants such as amphetamines and cocaine, idiosyncratic drug reactions, and serotonin syndrome.

Hypothermia:

In hypothermia, body temperature drops below that required for normal metabolism and bodily functions. In humans, this is usually due to excessive exposure to cold air or water, but it can be deliberately induced as a medical treatment. Symptoms usually appear when the body's core temperature drops by 1~2 °C (1~2 K) below normal temperature.

Here is the temperature classification for humans:

  • Hypothermia:  <35.0 °C (308.15 K)
  • Normal: 36.5~37.5 °C (309.65~310.65 K)
  • Fever: >37.5 or 38.3 °C (310.65 or 311.45 K)
  • Hyperthermia: >37.5 or 38.3 °C (310.65 or 311.45 K)
  • Hyperpyrexia: >40.0 or 41.5 °C (313.15 or 314.65 K)

Note: The difference between fever and hyperthermia is the underlying mechanism.
Different sources have different cut-offs for fever, hyperthermia and hyperpyrexia.
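
To make the °C-to-K conversions used throughout this post easy to reproduce, here is a minimal Python sketch of my own (the helper function name is made up); it converts the Celsius values from the classification above using K = °C + 273.15.

```python
# Minimal sketch: convert the Celsius values from the classification above
# into kelvins. The conversion is K = °C + 273.15.

def celsius_to_kelvin(t_celsius: float) -> float:
    return t_celsius + 273.15

for t_c in (35.0, 36.5, 37.5, 38.3, 40.0, 41.5):
    print(f"{t_c} °C = {celsius_to_kelvin(t_c):.2f} K")
```

The printed values match the kelvin figures given in parentheses in the classification above.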

In this post we learned a lot: not only did we learn about human body temperature, but we also learned how to use the SI temperature units K and °C correctly. After reading the post, please comment with your highest fever temperature in °C and K. Thank you!

The origin of SI units - K and °C



In ancient times people knew hot, cold and warm by their feelings. But feelings are not always reliable, so people needed a quantity to describe hotness and coldness, which is called temperature. There are two units of temperature in SI: the base unit kelvin (K) and the derived unit degree Celsius (°C).

To measure temperature, people use an instrument called a thermometer. People then used various special fixed points to define different temperature scales; these are called "empirical scales". In the past the Celsius scale was an empirical scale. In 1742, the Swedish astronomer Anders Celsius (1701-1744) created a new temperature scale based on water: 0 represented the boiling point of water, while 100 represented the freezing point of water. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with remarkable precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level. This pressure is known as one standard atmosphere. The BIPM's 10th General Conference on Weights and Measures (CGPM) later defined one standard atmosphere to equal precisely 101.325 kPa.

In 1743, the Lyonnais physicist Jean-Pierre Christin, permanent secretary of the Académie des sciences, belles-lettres et arts de Lyon, working independently of Celsius, developed a scale where zero represented the freezing point of water and 100 represented the boiling point of water. On 19 May 1743 he published the design of a mercury thermometer, the "Thermometer of Lyon" built by the craftsman Pierre Casati that used this scale.

In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carolus Linnaeus (1707-1778) reversed Celsius's scale. His custom-made "linnaeus-thermometer", for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time and whose workshop was located in the basement of the Stockholm observatory. As often happened in this age before modern communications, numerous physicists, scientists, and instrument makers are credited with having independently developed this same scale; among them were Pehr Elvius, the secretary of the Royal Swedish Academy of Sciences (which had an instrument workshop) and with whom Linnaeus had been corresponding; Daniel Ekström, the instrument maker; and Mårten Strömer (1707-1770) who had studied astronomy under Anders Celsius.

Since the 19th century, the scientific and thermometry communities worldwide have referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, as degrees centigrade. The symbol for temperature values on this scale is °C.

Because the term centigrade was also the Spanish and French language name for a unit of angular measurement (1/10000 of a right angle) and had a similar connotation in other languages, the term centesimal degree was used when very precise, unambiguous language was required by international standards bodies such as the BIPM. The 9th CGPM and the CIPM formally adopted "degree Celsius" (symbol: °C) in 1948.

By then the Celsius scale was being used all over the world. But the Celsius scale is an empirical scale, depending on the properties of a particular material (water), so it does not capture the essence of temperature. With the development of science, thermodynamic temperature was introduced.

Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, called absolute zero, the particle constituents of matter have minimal motion and can become no colder. In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: first, it does not depend on the properties of a particular material; second, it refers to an absolute zero defined according to the properties of the ideal gas.

In 1848 Lord Kelvin (William Thomson), wrote in his paper, On an Absolute Thermometric Scale, of the need for a scale whereby "infinite cold" (absolute zero) was the scale's null point, and which used the degree Celsius for its unit increment. Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. Thomson's value of "-273" was the negative reciprocal of 0.00366—the accepted expansion coefficient of gas per degree Celsius relative to the ice point, giving a remarkable consistency to the currently accepted value.
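
As a quick arithmetic check (a back-of-the-envelope sketch of my own, not a historical reconstruction), the negative reciprocal of the 0.00366 expansion coefficient quoted above does come out near −273:

```python
# Back-of-the-envelope check of Thomson's estimate of absolute zero.
expansion_coefficient = 0.00366   # per degree Celsius, relative to the ice point
absolute_zero_estimate = -1 / expansion_coefficient
print(f"{absolute_zero_estimate:.1f} °C")   # about -273.2 °C
```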

In 1954, Resolution 3 of the 10th CGPM gave the Kelvin scale its modern definition by designating the triple point of water as its second defining point and assigning its temperature to be exactly 273.16 °K.

In 1967/1968 Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. Furthermore, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water."

After the kelvin was redefined, the degree Celsius was redefined as well: a temperature in °C is the temperature in K minus 273.15 (t/°C = T/K − 273.15). In this definition, the size of the degree Celsius is the same as that of the kelvin, so when the temperature increases or decreases by 1 K, it also increases or decreases by 1 °C.

Unlike °C, the kelvin is not referred to or typeset as a degree. The kelvin is the primary unit of measurement in the physical sciences, but is often used in conjunction with the degree Celsius, which has the same magnitude. Since the triple point of water is 273.16 K (0.01 °C), absolute zero (0 K) is equivalent to −273.15 °C. The boiling point of water at 101.325 kPa is 373.15 K (100 °C).

K is used mainly in research and experiments, and sometimes in engineering, while in daily life we more often use the derived unit °C, which is common in weather reports, engineering, cooking, medical treatment, etc. Here are the rules for using these two units.

Kelvin is named after William Thomson, 1st Baron Kelvin. As with every SI unit whose name is derived from the proper name of a person, the first letter of its symbol is upper case (K). However, when an SI unit is spelled out in English, it should always begin with a lower case letter (kelvin), except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase.  When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm (e.g. "the triple point of water is exactly 273.16 kelvins"). When reference is made to the "Kelvin scale", the word "kelvin"—which is normally a noun—functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols (angle symbols, e.g. 45°3'4'', are the exception) there is a space between the numeric value and the kelvin symbol (e.g. "99.987 K").

Before the 13th General Conference on Weights and Measures (CGPM) in 1967–1968, the unit kelvin was called a "degree", the same as with the other temperature scales at the time. It was distinguished from the other scales with either the adjective suffix "Kelvin" ("degree Kelvin") or with "absolute" ("degree absolute") and its symbol was °K. The latter (degree absolute), which was the unit's official name from 1948 until 1954, was rather ambiguous since it could also be interpreted as referring to the old Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute". The 13th CGPM changed the unit name to simply "kelvin" (symbol K).  The omission of "degree" indicates that it is not relative to an arbitrary reference point like °C, but rather an absolute unit of measure which can be manipulated algebraically (e.g. multiplied by two to indicate twice the amount of "mean energy" available among elementary degrees of freedom of the system).

The "degree Celsius" has been the only SI unit whose full unit name contains an uppercase letter since the SI base unit for temperature, the kelvin, became the proper name in 1967 replacing the term degree Kelvin. The plural form is degrees Celsius.

The general rule of the BIPM is that the numerical value always precedes the unit, and a space is always used to separate the unit from the number, e.g. "30.2 °C" (not "30.2°C" or "30.2° C"). Thus the value of the quantity is the product of the number and the unit, the space being regarded as a multiplication sign (just as a space between units implies multiplication). The only exceptions to this rule are for the unit symbols for degree, minute, and second for plane angle (°, ', and '', respectively), for which no space is left between the numerical value and the unit symbol. Other languages, and various publishing houses, may follow different typographical rules.
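
To illustrate the spacing rule, here is a small sketch of my own (not an official BIPM tool; the function name is made up). It inserts the space for ordinary unit symbols and omits it only for the plane-angle symbols:

```python
# Minimal sketch: format a value with its unit symbol following the BIPM rule
# that a space separates the number from the unit, except for the plane-angle
# symbols degree, minute and second (°, ', '').

ANGLE_SYMBOLS = {"°", "'", "''"}

def format_quantity(value: float, unit: str) -> str:
    separator = "" if unit in ANGLE_SYMBOLS else " "
    return f"{value}{separator}{unit}"

print(format_quantity(30.2, "°C"))   # 30.2 °C
print(format_quantity(99.987, "K"))  # 99.987 K
print(format_quantity(45, "°"))      # 45°
```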

In this post we talked about the origin of the SI temperature units - K and °C. In the next post we will use these two units in a new instance!

2015-04-28

The origin of SI units - Second

The second (symbol: s) is the base unit of time in SI; it is the second division of the hour by sixty, the first division by 60 being the minute. Between 1000 CE (when al-Biruni used seconds) and 1960 the second was defined as 1/86,400 of a mean solar day (that definition still applies in some astronomical and legal contexts). Between 1960 and 1967, it was defined in terms of the period of the Earth's orbit around the Sun in 1900, but it is now defined more precisely in atomic terms. Seconds may be measured using mechanical, electric or atomic clocks.

Before mechanical clocks

The Egyptians subdivided daytime and nighttime into twelve hours each since at least 2000 BC, hence the seasonal variation of their hours. The Hellenistic astronomers Hipparchus (c. 150 BC) and Ptolemy (c. AD 150) subdivided the day sexagesimally and also used a mean hour (1⁄24 day), simple fractions of an hour (1⁄4, 2⁄3, etc.) and time-degrees (1⁄360 day or four modern minutes), but not modern minutes or seconds.

The day was subdivided sexagesimally, that is by 1⁄60, by 1⁄60 of that, by 1⁄60 of that, etc., to at least six places after the sexagesimal point (a precision of better than 2 microseconds) by the Babylonians after 300 BC. For example, six fractional sexagesimal places of a day were used in their specification of the length of the year, although they were unable to measure such a small fraction of a day in real time. As another example, they specified that the mean synodic month was 29;31,50,8,20 days (four fractional sexagesimal positions), which was repeated by Hipparchus and Ptolemy sexagesimally, and is currently the mean synodic month of the Hebrew calendar, though restated as 29 days 12 hours 793 halakim (where 1 hour = 1080 halakim). The Babylonians did not use the hour, but did use a double-hour lasting 120 modern minutes, a time-degree lasting four modern minutes, and a barleycorn lasting 3 1⁄3 modern seconds (the helek of the modern Hebrew calendar), but did not sexagesimally subdivide these smaller units of time. No sexagesimal unit of the day was ever used as an independent unit of time.
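
To see what 29;31,50,8,20 days amounts to, here is a short Python sketch of my own that expands the sexagesimal digits into decimal days and then into the hours and halakim quoted above:

```python
# Rough sketch: expand the Babylonian sexagesimal month length 29;31,50,8,20 days
# and compare it with 29 days 12 hours 793 halakim (1 hour = 1080 halakim).

sexagesimal_digits = [29, 31, 50, 8, 20]          # whole days, then 1/60, 1/3600, ...
month_days = sum(d / 60**i for i, d in enumerate(sexagesimal_digits))
print(f"mean synodic month = {month_days:.7f} days")   # about 29.5305942 days

fractional_day = month_days - 29
hours = fractional_day * 24
halakim = (hours - 12) * 1080
print(f"= 29 days {int(hours)} hours {halakim:.1f} halakim")   # about 793 halakim
```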

 In 1000, the Persian scholar al-Biruni gave the times of the new moons of specific weeks as a number of days, hours, minutes, seconds, thirds, and fourths after noon Sunday. In 1267, the medieval scientist Roger Bacon stated the times of full moons as a number of hours, minutes, seconds, thirds, and fourths (horae, minuta, secunda, tertia, and quarta) after noon on specified calendar dates. Although a third for 1⁄60 of a second remains in some languages, for example Polish (tercja) and Turkish (salise), the modern second is subdivided decimally.

Seconds measured by mechanical clocks

The earliest clocks to display seconds appeared during the last half of the 16th century. The earliest spring-driven timepiece with a second hand which marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection, dated between 1560 and 1570. During the 3rd quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute. In 1579, Jost Bürgi built a clock for William of Hesse that marked seconds.  In 1581, Tycho Brahe redesigned clocks that displayed minutes at his observatory so they also displayed seconds. However, they were not yet accurate enough for seconds. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds.

The second first became accurately measurable with the development of pendulum clocks keeping mean time (as opposed to the apparent time displayed by sundials). In 1644, Marin Mersenne calculated that a pendulum with a length of about 0.994 m would have a period at one standard gravity of precisely two seconds, one second for a swing forward and one second for the return swing, enabling such a pendulum to tick in precise seconds.
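
A quick sketch of my own, using the standard small-angle pendulum formula T = 2π√(L/g) with standard gravity g = 9.80665 m/s², shows why a roughly one-metre pendulum beats close to seconds:

```python
import math

# Small-angle period of a simple pendulum: T = 2 * pi * sqrt(L / g).
g = 9.80665  # standard gravity, m/s^2

def period(length_m: float) -> float:
    return 2 * math.pi * math.sqrt(length_m / g)

print(f"period of a 1 m pendulum: {period(1.0):.4f} s")            # about 2.006 s
seconds_pendulum_length = g / math.pi**2                           # length giving a 2 s period
print(f"length for exactly 2 s: {seconds_pendulum_length:.4f} m")  # about 0.9936 m
```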

In 1670, London clockmaker William Clement added this seconds pendulum to the original pendulum clock of Christiaan Huygens. From 1670 to 1680, Clement made many improvements to his clock and introduced the longcase or grandfather clock to the public. This clock used an anchor escapement mechanism with a seconds pendulum to display seconds in a small subdial. Compared with the older verge escapement, this mechanism required less power, caused less friction and was accurate enough to measure seconds reliably as one-sixtieth of a minute. Within a few years, most British precision clockmakers were producing longcase clocks and other clockmakers soon followed. Thus the second could now be reliably measured.

International second

Astronomical observations of the 19th and 20th centuries revealed that the mean solar day is slowly but measurably lengthening and the length of a tropical year is not entirely predictable either; thus the sun–earth motion is no longer considered a suitable basis for definition. With the advent of atomic clocks, it became feasible to define the second based on fundamental properties of nature.
Under the SI (via the CIPM), since 1967 the second has been defined as the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. In 1997 CIPM added that the periods would be defined for a caesium atom at rest, and approaching the theoretical temperature of absolute zero (0 K), and in 1999, it included corrections from ambient radiation. Absolute zero implies no movement, and therefore zero external radiation effects (i.e., zero local electric and magnetic fields).

SI multiples

SI prefixes are frequently combined with the word second to denote subdivisions of the second, e.g., the millisecond (one thousandth of a second), the microsecond (one millionth of a second), and the nanosecond (one billionth of a second). Though SI prefixes may also be used to form multiples of the second such as kilosecond (one thousand seconds), such units are rarely used in practice. The more common larger non-SI units of time are not formed by powers of ten; instead, the second is multiplied by 60 to form a minute, which is multiplied by 60 to form an hour, which is multiplied by 24 to form a day.
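
As a small illustration (my own sketch), the non-decimal multiples of the second and a few SI submultiples can be written out explicitly:

```python
# Minimal sketch: larger non-SI time units are built by multiplying the second,
# while SI prefixes give decimal submultiples of the second.
minute = 60              # seconds
hour = 60 * minute       # 3600 seconds
day = 24 * hour          # 86400 seconds, i.e. the 1/86,400 in the old definition
print(f"1 min = {minute} s, 1 h = {hour} s, 1 d = {day} s")

millisecond, microsecond, nanosecond = 1e-3, 1e-6, 1e-9   # in seconds
print(f"1 ms = {millisecond} s, 1 µs = {microsecond} s, 1 ns = {nanosecond} s")
```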

In this post we learned about the origin and the history of the SI time unit - s. In the next post we will introduce the origin of the SI temperature units - K and °C.

2015-04-24

The origin of SI units - Kilogram

The kilogram or kilogramme (SI unit symbol: kg), is the base unit of mass in SI and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The gram, 1/1000th of a kilogram, was originally defined in 1795 as the mass of one cubic centimeter of water at the melting point of water. The original prototype kilogram, manufactured in 1799 and from which the IPK is derived, had a mass equal to the mass of 1 litre of water at 3.98 °C (277.13 K).

The kilogram is the only SI base unit with an SI prefix ("kilo", symbol "k") as part of its name. It is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units in the SI system are defined relative to the kilogram so its stability is important.

The International Prototype Kilogram was commissioned by the CGPM under the authority of the Metre Convention (1875), and is in the custody of the BIPM, which holds it on behalf of the CGPM. After the International Prototype Kilogram had been found to vary in mass over time, the CIPM recommended in 2005 that the kilogram be redefined in terms of a fundamental constant of nature. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant. The decision was originally deferred until 2014; in 2014 it was deferred again until the next meeting. The International Prototype Kilogram (IPK) is rarely used or handled. Copies of the IPK kept by national metrology laboratories around the world were compared with the IPK in 1889, 1948, and 1989 to provide traceability of measurements of mass anywhere in the world back to the IPK.

The word kilogramme or kilogram is derived from the French kilogramme, which itself was a learned coinage, prefixing the Greek stem of χίλιοι (khilioi) "a thousand" to gramma, a Late Latin term for "a small weight", itself from Greek γράμμα. The word kilogramme was written into French law in 1795, in the Decree of 18 Germinal, which revised the older system of units introduced by the French National Convention in 1793, where the gravet had been defined as weight (poids) of a cubic centimetre of water, equal to 1/1000th of a grave. In the decree of 1795, the term gramme thus replaced gravet, and kilogramme replaced grave.

The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797; the spelling kilogram came into use as well. Now both spellings are used in the English language, with "kilogram" having become by far the more common.

At first the kilogram was defined by water; since 1889 it has been defined by the IPK. But the mass of the IPK has been found to drift very slightly over the years, so in the future the kilogram will be given a new definition, after which the IPK will no longer be used.

Kilogramme des Archives

On April 7, 1795, the gram was decreed in France to be "the absolute weight of a volume of pure water equal to the cube of the hundredth part of the metre, and at the temperature of melting ice." The concept of using a unit volume of water to define a unit measure of mass was proposed by the English philosopher John Wilkins in his 1668 essay as a means of linking mass and length.

Since trade and commerce typically involve items significantly more massive than one gram, and since a mass standard made of water would be inconvenient and unstable, the regulation of commerce necessitated the manufacture of a practical realization of the water-based definition of mass. Accordingly, a provisional mass standard was made as a single-piece, metallic artifact one thousand times as massive as the gram—the kilogram.

At the same time, work was commissioned to precisely determine the mass of a cubic decimeter (one litre) of water. Although the decreed definition of the kilogram specified water at 0 °C (273.15 K) - its highly stable temperature point—the French chemist Louis Lefèvre-Gineau and the Italian naturalist Giovanni Fabbroni after several years of research chose to redefine the standard in 1799 to water’s most stable density point: the temperature at which water reaches maximum density, which was measured at the time as 3.98 °C (277.13 K). They concluded that one cubic decimeter of water at its maximum density was equal to 99.9265% of the target mass of the provisional kilogram standard made four years earlier. That same year, 1799, an all-platinum kilogram prototype was fabricated with the objective that it would equal, as close as was scientifically feasible for the day, the mass of one cubic decimeter of water at 3.98 °C (277.13 K). The prototype was presented to the Archives of the Republic in June and on December 10, 1799, the prototype was formally ratified as the kilogramme des Archives (Kilogram of the Archives) and the kilogram was defined as being equal to its mass. This standard stood for the next 90 years.

International prototype kilogram

Since 1889 the magnitude of the kilogram has been defined as the mass of an object called the international prototype kilogram, often referred to in the professional metrology world as the "IPK". The IPK is made of a platinum alloy known as "Pt‑10Ir", which is 90% platinum and 10% iridium (by mass) and is machined into a right-circular cylinder (height = diameter) of 39.17 mm to minimize its surface area. The addition of 10% iridium improved upon the all-platinum Kilogram of the Archives by greatly increasing hardness while still retaining platinum’s many virtues: extreme resistance to oxidation, extremely high density (almost twice as dense as lead and more than 21 times as dense as water), satisfactory electrical and thermal conductivities, and low magnetic susceptibility. The IPK and its six sister copies are stored at the BIPM in an environmentally monitored safe in the lower vault located in the basement of the BIPM’s Pavillon de Breteuil in Sèvres on the outskirts of Paris. Three independently controlled keys are required to open the vault. Official copies of the IPK were made available to other nations to serve as their national standards. These are compared to the IPK roughly every 40 years, thereby providing traceability of local measurements back to the IPK.

The Metre Convention was signed on May 20, 1875 and further formalized the metric system (a predecessor to the SI), quickly leading to the production of the IPK. The IPK is one of three cylinders made in 1879 by Johnson Matthey, which continues to manufacture nearly all of the national prototypes today. In 1883, the mass of the IPK was found to be indistinguishable from that of the Kilogramme des Archives made eighty-four years prior, and was formally ratified as the kilogram by the 1st CGPM in 1889.

Proposed future definitions

As of 2014 the kilogram was the only SI unit still defined by an artifact. In 1960 the metre, having previously also been defined by reference to an artifact (a single platinum-iridium bar with two marks on it) was redefined in terms of invariant, fundamental physical constants (the wavelength of a particular emission of light emitted by krypton, and later the speed of light) so that the standard can be reproduced in different laboratories by following a written specification. At the 94th Meeting of the International Committee for Weights and Measures (2005) it was recommended that the same be done with the kilogram.

In October 2010, CIPM voted to submit a resolution for consideration at the CGPM, to "take note of an intention" that the kilogram be defined in terms of the Planck constant, h (which has dimensions of energy times time) together with other fundamental units. This resolution was accepted by the 24th conference of the CGPM in October 2011 and in addition the date of the 25th conference was moved forward from 2015 to 2014. Such a definition would theoretically permit any apparatus that was capable of delineating the kilogram in terms of the Planck constant to be used as long as it possessed sufficient precision, accuracy and stability. The watt balance (discussed below) may be able to do this.

Some alternative approaches to redefining the kilogram, fundamentally different from the watt balance, were also explored to varying degrees, with some abandoned. They include the following:

  • Atom-counting approaches: Carbon-12, Avogadro project, Ion accumulation
  • Ampere-based force

In this post we talked about the origin and the definition of the SI mass unit - kg. In the next post we will talk about the time unit - s.

2015-04-23

The origin of SI units - Metre

In the following posts we will learn about the origin of some SI units. In this post we will talk about the origin of the length unit in SI - the metre.

Before 1789 there were various types of units of measurement, especially in Europe, which caused inconvenience among countries. So people wanted to create a universal measure that could be used internationally. In the aftermath of the French Revolution (1789), the old units of measure that were associated with the ancien régime were replaced by new units. The livre was replaced by the decimal franc, and a new unit of length was introduced which became known as the metre. Although there was initially considerable resistance to the adoption of the new decimal system in France (including an official reversion to the mesures usuelles ["normal units"] for a period), the metre gained a following in continental Europe during the mid nineteenth century, particularly in scientific usage, and was officially adopted as an international measurement unit by the Metre Convention of 1875. Now the metre is a base unit in SI, which is used all over the world.

The metre was originally defined by the meridian, and now it is defined by the speed of light in vacuum.

Meridional definition

In 1668, John Wilkins, an English cleric and philosopher, proposed using Christopher Wren's suggestion of a pendulum with a half-period of one second to measure a standard length that Christiaan Huygens had observed to be 38 Rijnland inches or 39¼ English inches (997 mm) in length. In the 18th century, there were two favoured approaches to the definition of the standard unit of length. One approach followed Wilkins in defining the metre as the length of a pendulum with a half-period of one second, a 'seconds pendulum'. The other approach suggested defining the metre as one ten-millionth of the length of the Earth's meridian along a quadrant; that is, the distance from the Equator to the North Pole. In 1791, the French Academy of Sciences selected the meridional definition over the pendular definition because the force of gravity varies slightly over the surface of the Earth, which affects the period of a pendulum.

To establish a universally accepted foundation for the definition of the metre, more accurate measurements of this meridian would have to be made. The French Academy of Sciences commissioned an expedition led by Jean Baptiste Joseph Delambre and Pierre Méchain, lasting from 1792 to 1799, which measured the distance between a belfry in Dunkerque and Montjuïc castle in Barcelona to estimate the length of the meridian arc through Dunkerque. This portion of the meridian, assumed to be the same length as the Paris meridian, was to serve as the basis for the length of the quarter meridian connecting the North Pole with the Equator.

The exact shape of the Earth is not a simple mathematical shape (sphere or oblate spheroid) at the level of precision required for defining a standard of length. The irregular and particular shape of the Earth (smoothed to sea level) is called a geoid, which means "Earth-shaped". Despite this fact, and based on provisional results from the expedition, France adopted the metre as its official unit of length in 1793. Although it was later determined that the first prototype metre bar was short by a fifth of a millimetre because of miscalculation of the flattening of the Earth, this length became the standard. The circumference of the Earth through the poles is therefore slightly more than forty million metres (40007863 m).
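
As a rough check of those figures (my own arithmetic, using only the numbers quoted above), one ten-millionth of the quarter meridian derived from 40007863 m overshoots the metre by roughly a fifth of a millimetre, which is the shortfall of the prototype bar:

```python
# Rough check: how far the original metre falls short, using the polar
# circumference quoted above.
polar_circumference_m = 40007863            # metres, slightly more than 4e7
quarter_meridian_m = polar_circumference_m / 4
ideal_metre_m = quarter_meridian_m / 1e7    # one ten-millionth of the quadrant
shortfall_mm = (ideal_metre_m - 1) * 1000
print(f"one ten-millionth of the quadrant = {ideal_metre_m:.7f} m")
print(f"the prototype metre is short by about {shortfall_mm:.2f} mm")
```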

Prototype metre bar

In the 1870s and in light of modern precision, a series of international conferences was held to devise new metric standards. The Metre Convention (Convention du Mètre) of 1875 mandated the establishment of a permanent International Bureau of Weights and Measures (BIPM: Bureau International des Poids et Mesures) to be located in Sèvres, France. This new organisation would preserve the new prototype metre and kilogram standards when constructed, distribute national metric prototypes, and maintain comparisons between them and non-metric measurement standards. The organisation created a new prototype bar in 1889 at the first General Conference on Weights and Measures (CGPM: Conférence Générale des Poids et Mesures), establishing the International Prototype Metre as the distance between two lines on a standard bar composed of an alloy of ninety percent platinum and ten percent iridium, measured at the melting point of ice.

The original international prototype of the metre is still kept at the BIPM under the conditions specified in 1889. A discussion of measurements of a standard metre bar and the errors encountered in making the measurements is found in a NIST document.

Standard wavelength of krypton-86 emission

In 1893, the standard metre was first measured with an interferometer by Albert A. Michelson, the inventor of the device and an advocate of using some particular wavelength of light as a standard of length. By 1925, interferometry was in regular use at the BIPM. However, the International Prototype Metre remained the standard until 1960, when the eleventh CGPM defined the metre in the new International System of Units (SI) as equal to 1650763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum.

Speed of light

To further reduce uncertainty, the 17th CGPM in 1983 replaced the definition of the metre with its current definition, thus fixing the length of the metre in terms of the second and the speed of light:
The metre is the length of the path travelled by light in vacuum during a time interval of 1/299792458 of a second.
This definition fixed the speed of light in vacuum at exactly 299792458 m/s. An intended by-product of the 17th CGPM's definition was that it enabled scientists to compare their lasers accurately using frequency, resulting in wavelengths with one-fifth the uncertainty involved in the direct comparison of wavelengths, because interferometer errors were eliminated. To further facilitate reproducibility from lab to lab, the 17th CGPM also made the iodine-stabilised helium-neon laser "a recommended radiation" for realising the metre. For the purpose of delineating the metre, the BIPM currently considers the HeNe laser wavelength, λHeNe, to be 632.99121258 nm with an estimated relative standard uncertainty (U) of 2.1E−11. This uncertainty is currently one limiting factor in laboratory realisations of the metre, and it is several orders of magnitude poorer than that of the second, based upon the caesium fountain atomic clock (U = 5E-16). Consequently, a realisation of the metre is usually delineated (not defined) today in labs as 1579800.762042(33) wavelengths of helium-neon laser light in a vacuum, the error stated being only that of frequency determination. This bracket notation expressing the error is explained in the article on measurement uncertainty.
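
As a quick consistency check (my own sketch, using only the constants quoted above), multiplying the recommended HeNe wavelength by the stated number of wavelengths recovers one metre, and the corresponding optical frequency follows from the fixed speed of light:

```python
# Consistency check of the helium-neon realisation of the metre.
c = 299_792_458                 # speed of light in vacuum, m/s (exact by definition)
lambda_hene = 632.99121258e-9   # recommended HeNe wavelength in vacuum, m
n_wavelengths = 1_579_800.762042

print(f"{n_wavelengths} wavelengths = {n_wavelengths * lambda_hene:.9f} m")  # ~1.000000000 m
print(f"HeNe frequency = {c / lambda_hene:.6e} Hz")                          # ~4.74e14 Hz
```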

Practical realisation of the metre is subject to uncertainties in characterising the medium, to various uncertainties of interferometry, and to uncertainties in measuring the frequency of the source. A commonly used medium is air, and the National Institute of Standards and Technology has set up an online calculator to convert wavelengths in vacuum to wavelengths in air. As described by NIST, in air, the uncertainties in characterising the medium are dominated by errors in measuring temperature and pressure. Errors in the theoretical formulas used are secondary. By implementing a refractive index correction such as this, an approximate realisation of the metre can be implemented in air, for example, using the formulation of the metre as 1579800.762042(33) wavelengths of helium-neon laser light in vacuum, and converting the wavelengths in a vacuum to wavelengths in air. Of course, air is only one possible medium to use in a realisation of the metre, and any partial vacuum can be used, or some inert atmosphere like helium gas, provided the appropriate corrections for refractive index are implemented.

Timeline of definition


  • 8 May 1790 - The French National Assembly decides that the length of the new metre would be equal to the length of a pendulum with a half-period of one second.
  • 30 March 1791 - The French National Assembly accepts the proposal by the French Academy of Sciences that the new definition for the metre be equal to one ten-millionth of the length of the Earth's meridian along a quadrant through Paris, that is the distance from the equator to the north pole.
  • 1795 - Provisional metre bar constructed of brass. Based on Bessel's ellipsoid and legally equal to 443.44 lines on the toise du Pérou (a standard French unit of length from 1747).
  • 10 December 1799 - The French National Assembly specifies the platinum metre bar, constructed on 23 June 1799 and deposited in the National Archives, as the final standard. Legally equal to 443.296 lines on the toise du Pérou.
  • 28 September 1889 - The 1st General Conference on Weights and Measures (CGPM) defines the metre as the distance between two lines on a standard bar of an alloy of platinum with 10% iridium, measured at the melting point of ice.
  • 6 October 1927 - The 7th CGPM redefines the metre as the distance, at 0 °C (273.15 K), between the axes of the two central lines marked on the prototype bar of platinum-iridium, this bar being subject to one standard atmosphere of pressure and supported on two cylinders of at least 1 cm diameter, symmetrically placed in the same horizontal plane at a distance of 571 mm from each other.
  • 14 October 1960 - The 11th CGPM defines the metre as 1650763.73 wavelengths in a vacuum of the radiation corresponding to the transition between the 2p10 and 5d5 quantum levels of the krypton-86 atom.
  • 21 October 1983 - The 17th CGPM defines the metre as the length of the path travelled by light in a vacuum during a time interval of 1/299792458 of a second.
  • 2002 - The International Committee for Weights and Measures (CIPM) considers the metre to be a unit of proper length and thus recommends this definition be restricted to "lengths l which are sufficiently short for the effects predicted by general relativity to be negligible with respect to the uncertainties of realization".

Now the metre is one of the base units in SI, used all over the world. In the next post we will talk about the origin of the SI mass unit - kg.

2015-04-20

Instance of using SI - International standard paper size

In daily life we often hear about "A4, B3, C4…" papers. But what are these? They are different paper sizes! But exactly what size are they, and why are they like this? In this post we will introduce them.

It is known to all that paper plays an important role in human life. In the past, different sizes and names were used in different parts of the world, which caused inconvenience among countries. So people needed a new standard of paper sizes which could be used all over the world.

The new standard is based on the square root of 2 (1.414…). The advantages of basing a paper size upon an aspect ratio of 1.414 were already noted in 1786 by the German scientist Georg Christoph Lichtenberg, in a letter to Johann Beckmann. The formats that became A2, A3, B3, B4 and B5 were developed in France, and published in 1798 during the French Revolution.

Early in the twentieth century, Dr Walter Porstmann turned Lichtenberg's idea into a proper system of different paper sizes. Porstmann's system was introduced as a DIN standard (DIN 476) in Germany in 1922, replacing a vast variety of other paper formats. Even today the paper sizes are called "DIN Ax" in everyday use in Germany, Austria, Spain and Portugal.

The main advantage of this system is its scaling: if a sheet with an aspect ratio of  1.414 is divided into two equal halves parallel to its shortest sides, then the halves will again have an aspect ratio of 1.414.  Folded brochures of any size can be made by using sheets of the next larger size, e.g. A4 sheets are folded to make A5 brochures. The system allows scaling without compromising the aspect ratio from one size to another – as provided by office photocopiers, e.g. enlarging A4 to A3 or reducing A3 to A4. Similarly, two sheets of A4 can be scaled down to fit exactly one A4 sheet without any cutoff or margins.

The weight of each sheet is also easy to calculate given the basis weight in grams per square metre (g/m² or "gsm"). Since an A0 sheet has an area of 1 m², its weight in grams is the same as its basis weight in g/m². A standard A4 sheet made from 80 g/m² paper weighs 5 g, as it is one 16th (four halvings) of an A0 sheet. Thus the weight, and the associated postage rate, can be easily calculated by counting the number of sheets used.
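
Here is a minimal sketch of that calculation (my own illustration; the function name is made up). The only inputs are the basis weight and how many halvings separate A0 from the size of interest:

```python
# Minimal sketch: weight of one sheet of A-series paper from its basis weight.
def sheet_weight_g(basis_weight_gsm: float, a_number: int) -> float:
    """Weight in grams of one A(a_number) sheet; A0 has an area of 1 m^2."""
    area_m2 = 1.0 / 2**a_number      # each step down the series halves the area
    return basis_weight_gsm * area_m2

print(sheet_weight_g(80, 4))   # A4 at 80 g/m²: 5.0 g
print(sheet_weight_g(80, 5))   # A5 at 80 g/m²: 2.5 g
```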

ISO 216 and its related standards were first published between 1975 and 1995:

  • ISO 216:2007, defining the A and B series of paper sizes
  • ISO 269:1985, defining the C series for envelopes
  • ISO 217:2013, defining the RA and SRA series of raw ("untrimmed") paper sizes
Now we use three series of paper sizes: A, B and C.


A series:

Paper in the A series format has a 1:1.414 (1:√2) aspect ratio, although the actual dimensions are rounded to the nearest millimetre. A0 is defined so that it has an area of 1 m² prior to the rounding, so the A0 paper size is 841 mm × 1189 mm. Successive paper sizes in the series (A1, A2, A3, etc.) are defined by halving the preceding paper size, cutting parallel to its shorter side, so that the long side of A(n+1) is the same length as the short side of An prior to rounding. The most frequently used size of this series is A4, which is 210 mm × 297 mm.

The behaviour of the aspect ratio is easily proven: on a sheet of paper, let a be the long side and b the short side, so a/b = √2 ≈ 1.414. When the sheet of paper is folded in half widthwise, the new long side is b and the new short side is c = a/2. The new ratio is b/c = b/(a/2) = 2b/a = 2/(a/b) = 2/√2 = √2 ≈ 1.414. Therefore, the aspect ratio is preserved for the new dimensions of the folded paper.
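
The same halve-and-round construction can be written down directly. The sketch below is my own (the function name is made up); it rounds the halved side down to whole millimetres at each step, which reproduces the standard values, including A4 = 210 mm × 297 mm:

```python
# Sketch: generate A-series sizes by repeatedly halving the long side of A0
# (841 mm x 1189 mm) and rounding down to whole millimetres.
import math

def a_series(n_max: int = 8):
    short, long = 841, 1189          # A0 in millimetres
    sizes = {0: (short, long)}
    for n in range(1, n_max + 1):
        short, long = math.floor(long / 2), short   # cut parallel to the short side
        sizes[n] = (short, long)
    return sizes

for n, (w, h) in a_series().items():
    print(f"A{n}: {w} mm x {h} mm")   # A4 comes out as 210 mm x 297 mm
```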

B series:

In addition to the A series, there is a less common B series. The area of B series sheets is the geometric mean of successive A series sheets. So, B1 is between A0 and A1 in size, with an area of 0.707 m². As a result, B0 is 1 m wide, and other sizes in the B series are a half, a quarter or further fractions of a metre wide. While less common in office use, it is used for a variety of special situations. Many posters use B-series paper or a close approximation, such as 50 cm × 70 cm; B5 is a relatively common choice for books. The B series is also used for envelopes and passports. The B-series is widely used in the printing industry to describe both paper sizes and printing press sizes, including digital presses. B3 paper is used to print two A4 pages side by side using imposition; four pages would be printed on B2, eight on B1, etc.

C series:

The C series is used only for envelopes and is defined in ISO 269. The area of C series sheets is the geometric mean of the areas of the A and B series sheets of the same number; for instance, the area of a C4 sheet is the geometric mean of the areas of an A4 sheet and a B4 sheet. This means that C4 is slightly larger than A4, and slightly smaller than B4. The practical usage of this is that a letter written on A4 paper fits inside a C4 envelope, and C4 paper fits inside a B4 envelope.
[Table: all the ISO/DIN paper sizes in SI units]

The tolerances specified in the standard are

  • ±1.5 mm for dimensions up to 150 mm,
  • ±2 mm for lengths in the range 150 to 600 mm and
  • ±3 mm for any dimension above 600 mm.

By 1975 so many countries were using the German system that it was established as an ISO standard, as well as the official United Nations document format. By 1977, A4 was the standard letter format in 88 of 148 countries. Today the standard has been adopted by nearly all countries in the world. ISO paper sizes affect writing paper, stationery, cards, and some printed documents all over the world.


2015-04-16

Instance of using SI - Let us make delicious chocolate chip cookies!



SI units can be seen everywhere, and recipes are no exception. Now let us make delicious chocolate chip cookies in SI units! Follow me!

Chocolate Chip Cookies

Ingredients:

  • 550 ml unsifted flour
  • 5 ml baking soda
  • 5 ml salt
  • 250 ml butter or margarine, softened
  • 175 ml granulated sugar
  • 175 ml firmly packed brown sugar
  • 5 ml vanilla extract
  • 2 eggs
  • 2 packages (168 g each) semisweet chocolate chips
  • 250 ml chopped nuts

How to make:


  1. Preheat the oven to 190 °C.
  2. Combine flour, baking soda and salt in small bowl, then set aside.
  3. Combine sugar, brown sugar and vanilla in large bowl and beat in eggs.
  4. Gradually add the flour mixture to the large bowl and mix well, then stir in chocolate chips and nuts.
  5. Using a 5 ml measure, drop rounded measures onto an ungreased cookie sheet.
  6. Bake 8~10 min; this makes about 100 cookies, each about 5 cm across.

Note 1: Liquid and dry measure equivalencies

  • 1/4 teaspoon = 1.25 ml
  • 1/2 teaspoon = 2.5 ml
  • 1 teaspoon = 5 ml
  • 1 tablespoon = 15 ml
  • 1/8 cup = 30 ml
  • 1/4 cup = 60 ml
  • 1/3 cup = 80 ml
  • 1/2 cup = 125 ml
  • 1 cup = 250 ml


Note 2: Oven Temperature equivalencies

  • Cool: 90 °C (363 K)
  • Very slow: 120 °C (393 K)
  • Slow: 150~160 °C (423~433 K)
  • Moderately slow: 160~180 °C (433~453 K)
  • Moderate: 180~190 °C (453~463 K)
  • Moderately hot: 190~200 °C (463~473 K)
  • Hot: 200~230 °C (473~503 K)
  • Very hot: 230~260 °C (503~533 K)