Tuesday, December 2, 2014

Why do we always use “x” for everything in math?

For hundreds of years, x has been the go-to symbol for the unknown quantity in mathematical equations. So who started this practice?

Algebra was born in the Middle East, during the Golden Age of medieval Islamic civilization (750 to 1258 AD), and its early form can be seen in the work of Muhammad Al-Khwarizmi and his 9th-century book, Kitab al-jabr wal-muqabala (al-jabr later morphing into algebra in English). During this heyday, Muslim rule and culture had expanded onto the Iberian Peninsula, where the Moors encouraged scholarship in the sciences and math.

So what does this have to do with the letter “x” in math? In a recent TED talk, the director of The Radius Foundation, Terry Moore, posited that the use of “x” in this way began with the inability of Spanish scholars to translate certain Arabic sounds, including the letter sheen (or shin). According to Moore, the word for “unknown thing” in Arabic is al-shalan, and it appeared many times in early mathematical works. (For example, you might see “three unknown things equals 15,” with each “unknown thing” then being 5.)
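In modern notation, that toy example is just a one-step linear equation, with x standing in for the “unknown thing”:

3x = 15  ⇒  x = 15/3 = 5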

But since Spanish scholars had no corresponding sound for “sh,” they went with the “ck” sound, which in classical Greek is written with the chi symbol, X. Moore theorizes, as many others before him have done, that when this was later translated into Latin, the chi (X) was replaced with the more common Latin x.  This is similar to how Xmas, meaning Christmas, came about from the common practice of religious scholars using the Greek letter chi (X) as a shorthand for “Christ.”

The principal problem with Moore’s explanation is that there is no direct documented evidence to support it. Moreover, one would think translators of these works would care about the meaning of the words, not their phonetics, so whether Spanish had a “sh” sound or not should have been irrelevant. Despite the lack of direct evidence and these flaws in the argument, it nonetheless remains a very popular origin theory, even among many academics. (Do a quick Google search and you’ll find many a PhD in mathematics parroting this theory.)

The 1909-1916 edition of Webster’s Dictionary, among others, puts forth a similar theory, though stating that the Arabic word for the singular “thing,” “shei,” was translated into the Greek “xei” and later shortened to x. Dr. Ali Khounsary also notes that the Greek word for unknown, xenos, begins with x, and the convention could simply have been born of an abbreviation. But here, again, we lack any direct documented evidence to support these theories.

As for a documented theory, we turn to the great philosopher and mathematician René Descartes (1596-1650). It’s entirely possible Descartes did not come up with the practice of using “x” for an unknown, perhaps borrowing it from someone else, but as far as documented evidence that has survived to today goes, he appears to be its creator, as noted by the OED and by Florian Cajori’s phenomenal A History of Mathematical Notations (1929). At the very least, Descartes helped popularize the practice.

Specifically, in his landmark work, La Géométrie (1637), Descartes solidified the movement to symbolic notation by instituting the convention of using the lowercase letters at the beginning of the alphabet for known quantities (e.g., a, b and c) and using those at the end of the alphabet for unknown quantities (e.g., z, y and x).
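That convention is still with us today. In the familiar general quadratic (a modern illustration, not an example drawn from La Géométrie itself), a, b, and c stand for given coefficients while x is the quantity to be solved for:

ax² + bx + c = 0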

Why? And why x more than y and z for unknowns? Nobody knows. It has been speculated that the prominence of x over y and z in this work had to do with typesetting; one story goes that it was Descartes’ printer who suggested x be the principal unknown in La Géométrie because it was the least-used letter, and thus the one for which he had the most type blocks available. Whether this is true or not, Descartes used x for an unknown at least as early as 1629 in various manuscripts, well before La Géométrie. And, indeed, it would seem he had not settled on any hard rule for x, y, and z indicating unknowns; in some manuscripts from this time, he actually used x, y, and z to represent known quantities, casting even further doubt on the supposed “unknown thing” translation theories listed above.

So, in the end, by all appearances, Descartes simply chose letters to represent different quantities as was convenient in a given work, and it just so happened that in his landmark La Géométrie he settled on this particular variable nomenclature, perhaps on a whim.

Whatever the case, as with Descartes’ notation for powers (x³), after the publication of La Géométrie the use of x as the principal unknown (as well as the more general tradition of a, b, c = knowns and x, y, z = unknowns) gradually caught on. And the rest, as they say, is mathematical history.

For more detail: http://en.wikipedia.org/wiki/X
