For earthquakes that occurred between about 1890 (when modern seismographs came into use) and 1935 (when Charles Richter developed the magnitude scale), seismologists went back to the old records and compared the seismograms from those days with similar records from later earthquakes. For earthquakes prior to about 1890, magnitudes have been estimated by looking at the physical effects (such as the amount of faulting, landslides, sand blows, or river channel changes) plus the human effects (such as the area of damage or felt reports of how strongly the shaking was experienced) and comparing them to modern earthquakes.
Many assumptions have to be made when making these comparisons. For example, how do you compare the shaking felt by people living in log cabins or tents in the early 1800s with the shaking felt by people living in high-rise steel and concrete buildings (with waterbeds!) in the 1990s? Because different researchers can get widely varying magnitudes depending on the assumptions they use in making these comparisons, many of the old earthquakes have big differences in the magnitudes assigned to them. For example, magnitude estimates for the quakes that occurred near New Madrid, Missouri in 1811 and 1812 range from the upper magnitude 6s to as high as 8.8, all because of the choices the researchers made about how to compare the data.
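To see how the choice of assumptions drives such a wide spread, here is a minimal sketch of one common approach: estimating magnitude from the total area over which a quake was felt, using an empirical relation of the form M = a + b·log10(felt area). The felt-area figure and both sets of coefficients below are illustrative assumptions, not published calibrations for New Madrid or any other event.

```python
import math

def magnitude_from_felt_area(felt_area_km2, a, b):
    """Estimate magnitude from total felt area via M = a + b*log10(A).

    The coefficients a and b are hypothetical here; real studies fit them
    by regression against modern earthquakes with known magnitudes.
    """
    return a + b * math.log10(felt_area_km2)

# Illustrative felt area for a very widely felt historical earthquake (km^2).
felt_area = 2_500_000

# Two hypothetical calibrations, standing in for different researchers'
# assumptions about how historical felt reports map onto modern shaking:
low = magnitude_from_felt_area(felt_area, a=2.0, b=0.8)
high = magnitude_from_felt_area(felt_area, a=2.5, b=1.0)

print(f"Same data, different assumptions: M {low:.1f} vs M {high:.1f}")
```

With the same felt-area input, the two assumed calibrations give roughly M 7.1 versus M 8.9, which is the same kind of gap seen in the published New Madrid estimates.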