Earthquake size, as measured by the Richter Scale, is a well-known but not well-understood concept. The idea of a logarithmic earthquake magnitude scale was first developed by Charles Richter in the 1930s for measuring the size of earthquakes occurring in southern California, using relatively high-frequency data from nearby seismograph stations. This magnitude scale was referred to as ML, with the L standing for local. It is this scale that eventually became known as the Richter magnitude.
As more seismograph stations were installed around the world, it became apparent that the method developed by Richter was strictly valid only for certain frequency and distance ranges. To take advantage of the growing number of globally distributed seismograph stations, new magnitude scales extending Richter's original idea were developed. These include body wave magnitude (Mb) and surface wave magnitude (Ms). Each is valid for a particular frequency range and type of seismic signal, and within its range of validity each is equivalent to the Richter magnitude.
Because of the limitations of all three magnitude scales (ML, Mb, and Ms), a new, more uniformly applicable extension of the magnitude scale, known as moment magnitude, or Mw, was developed. In particular, for very large earthquakes, moment magnitude gives the most reliable estimate of earthquake size.
Moment is a physical quantity proportional to the slip on the fault multiplied by the area of the fault surface that slips; it is related to the total energy released in the earthquake. The moment can be estimated from seismograms (and also from geodetic measurements). The moment is then converted into a number similar to other earthquake magnitudes by a standard formula. The result is called the moment magnitude. The moment magnitude provides an estimate of earthquake size that is valid over the complete range of magnitudes, a characteristic that was lacking in other magnitude scales.
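The conversion described above can be sketched in code. The snippet below computes the seismic moment as rigidity times fault area times slip, then converts it to moment magnitude using the widely used Hanks-Kanamori relation, Mw = (2/3)(log10 M0 - 9.1) with M0 in newton-meters; the function name and the illustrative fault parameters are assumptions for this example, not values from the text.

```python
import math

def moment_magnitude(rigidity_pa, area_m2, slip_m):
    """Estimate Mw from fault slip.

    Seismic moment M0 = rigidity * fault area * average slip (in N*m),
    then converted via the Hanks-Kanamori relation:
    Mw = (2/3) * (log10(M0) - 9.1)
    """
    m0 = rigidity_pa * area_m2 * slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative values (assumptions, not from the text): crustal rigidity
# of 30 GPa, a 40 km x 25 km rupture surface, and 2 m of average slip.
mw = moment_magnitude(30e9, 40e3 * 25e3, 2.0)
print(f"Mw = {mw:.1f}")  # a magnitude in the low 7s
```

Because the moment enters through a logarithm, doubling the slip or the rupture area raises Mw by only about 0.2, which is why a unit step in magnitude corresponds to a roughly thirty-fold jump in energy release.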