Quantiles are essentially points taken at regular intervals of cumulative probability from the cumulative distribution function of a random variable. Dividing ordered data into q essentially equal-sized subsets is the motivation for q-quantiles; the quantiles are the data values marking the boundaries between consecutive subsets. Put another way, the pth q-quantile is the value x such that the probability that the random variable will be less than x is at most p/q and the probability that it will be less than or equal to x is at least p/q. There are q-1 q-quantiles, with p an integer satisfying 0 < p < q.

Specially named quantiles include the percentiles (q = 100), deciles (q = 10), quintiles (q = 5), quartiles (q = 4), and the median (q = 2). There are 99 percentiles, each corresponding to an integer number of percent (such as the 99th percentile). The deciles are the 10th, 20th, 30th, 40th, 50th, 60th, 70th, 80th, and 90th percentiles; the quintiles are the 20th, 40th, 60th, and 80th percentiles; the quartiles are the 25th, 50th, and 75th percentiles; and the median is the 50th percentile. Some software programs regard the minimum and maximum as the 0th and 100th percentile, respectively, but such terminology is an extension beyond traditional statistical definitions. For an infinite population, the pth q-quantile is the value at which the cumulative distribution function equals p/q. For a finite sample of N data points, calculate Np/q. If this is not an integer, round up to the next integer to get the appropriate sample number (assuming the samples are ordered by increasing value); if it is an integer, then any value between that sample's value and the next sample's value can be taken as the quantile, and it is conventional (though arbitrary) to take the average of those two values.
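As a concrete illustration, here is a minimal Python sketch of the finite-sample rule just described; the function name q_quantile is a placeholder invented for this example, not part of any standard library.

```python
def q_quantile(data, p, q):
    """pth q-quantile of a finite sample (integers 0 < p < q), by the rule above:
    compute N*p/q; if it is not an integer, round up and take that order
    statistic; if it is an integer, average that order statistic with the
    next one (the conventional, though arbitrary, choice)."""
    xs = sorted(data)
    k, rem = divmod(len(xs) * p, q)   # N*p/q == k + rem/q, in exact integer arithmetic
    if rem:
        return xs[k]                  # round up: the (k+1)th value, 0-based index k
    return (xs[k - 1] + xs[k]) / 2    # average the kth and (k+1)th values
```

For instance, q_quantile([3, 6, 7, 8, 8, 10, 13, 15, 16, 20], 1, 4) returns 7, matching the worked example given below.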

More formally: the pth q-quantile of the distribution of a random variable X can be defined as the value(s) x such that

    P(X < x) ≤ p/q   and   P(X ≤ x) ≥ p/q.

If, instead of taking p and q as integers, the quantile is based on a real number p with 0 < p < 1, then this becomes: the p-quantile of the distribution of a random variable X can be defined as the value(s) x such that

    P(X < x) ≤ p   and   P(X ≤ x) ≥ p.

For example, given the 10 data values {3, 6, 7, 8, 8, 10, 13, 15, 16, 20}, the first quartile is determined by 10*1/4 = 2.5, which rounds up to 3, and the third sample is 7. The second quartile (the median) is determined by 10*2/4 = 5, which is an integer, so take the average of the fifth and sixth values, that is (8 + 10)/2 = 9, though any value from 8 through 10 could be taken to be the median. The third quartile is determined by 10*3/4 = 7.5, which rounds up to 8, and the eighth sample is 15. The motivation for this method is that the first quartile should divide the data between the bottom quarter and the top three-quarters. Ideally, this would mean 2.5 of the samples are below the first quartile and 7.5 are above, which in turn means that the third data sample is "split in two", making it part of both the first and second quarters of the data, so the quartile boundary falls exactly at that sample. (Note that a quartile is the boundary between two quarters, which are subsets of the data: the first quarter consists of the data below the first quartile, the second quarter of the data between the first and second quartiles, and so on. In statistics the first quarter is the lowest quarter, whereas in everyday life, such as ranking students by grade, the first quarter is often regarded as the highest.)
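The same arithmetic, transcribed into a few lines of Python as a quick check of the worked example (the variable names are arbitrary):

```python
data = sorted([3, 6, 7, 8, 8, 10, 13, 15, 16, 20])

# First quartile: 10*1/4 = 2.5, round up to 3, take the third value
q1 = data[3 - 1]                      # 7

# Second quartile (median): 10*2/4 = 5 is an integer,
# so average the fifth and sixth values
q2 = (data[5 - 1] + data[6 - 1]) / 2  # (8 + 10) / 2 = 9.0

# Third quartile: 10*3/4 = 7.5, round up to 8, take the eighth value
q3 = data[8 - 1]                      # 15

print(q1, q2, q3)                     # 7 9.0 15
```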

A related point of usage: standardized test results are commonly misreported as a student scoring "in the 80th percentile", as if the 80th percentile were an interval to score "in", which it is not; one can score "at" some percentile, or between two percentiles, but not "in" a percentile.

Different software packages use slightly different algorithms, so the answers they produce may differ slightly for any given data set. Besides the algorithm given above, which follows directly from the probability-based definition, there are at least four other algorithms in common use (adopted for various reasons, such as ease of computation).
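For instance, the Python standard library's statistics.quantiles offers two other common conventions ('exclusive', its default, and 'inclusive'); a quick sketch on the example data above shows that neither reproduces the quartiles 7, 9, 15 obtained by the rule described here:

```python
import statistics

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]

# The rule described in this article gives quartiles 7, 9, 15.
print(statistics.quantiles(data, n=4, method='exclusive'))  # [6.75, 9.0, 15.25]
print(statistics.quantiles(data, n=4, method='inclusive'))  # [7.25, 9.0, 14.5]
```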

If a distribution is symmetric, then the median equals the mean (provided the mean exists), but this is not generally the case.

Quantiles are useful measures because they are less sensitive than the mean to long-tailed distributions and outliers. For instance, for a random variable with an exponential distribution, any particular sample of this random variable has roughly a 63% chance of being less than the mean. This is because the exponential distribution has a long tail for positive values but is zero for negative numbers, so the mean is pulled above the median.
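The 63% figure is 1 - exp(-1) ≈ 0.632, which a quick simulation can confirm; the rate parameter of 1.0 below is an arbitrary choice, since the probability does not depend on it.

```python
import math
import random

# For an exponential distribution with rate lam, the mean is 1/lam and
# P(X < mean) = 1 - exp(-lam * 1/lam) = 1 - exp(-1), regardless of lam.
print(1 - math.exp(-1))                                 # 0.6321...

random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(sum(x < mean for x in samples) / len(samples))    # roughly 0.63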

Empirically, if the data you are analyzing are not actually distributed according to your assumed distribution, or if there are other potential sources of outliers far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics.

Closely related is the subject of robust regression, in which the sum of the absolute values of the observed errors is used in place of the sum of squared errors. The connection is that the mean is the single summary value of a distribution that minimizes expected squared error, while the median minimizes expected absolute error. Robust regression shares this relative insensitivity to large deviations in outlying observations.
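A small brute-force sketch of that connection, using a made-up skewed sample and a coarse grid search (both arbitrary choices for this illustration):

```python
import statistics

data = [1, 2, 2, 3, 4, 5, 40]   # skewed sample with one large outlier

def sum_squared_error(c):
    return sum((x - c) ** 2 for x in data)

def sum_absolute_error(c):
    return sum(abs(x - c) for x in data)

candidates = [i / 100 for i in range(0, 5001)]   # grid from 0.00 to 50.00

best_sq = min(candidates, key=sum_squared_error)
best_abs = min(candidates, key=sum_absolute_error)

print(best_sq, statistics.mean(data))     # 8.14 and 8.142...  (the mean)
print(best_abs, statistics.median(data))  # 3.0 and 3          (the median)
```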

The quantiles of a random variable are generally preserved under increasing transformations, in the sense that, for example, if m is the median of a random variable X then 2m is the median of 2X, unless an arbitrary choice has been made from a range of values to specify a particular quantile. Quantiles can also be used in cases where only ordinal data are available.
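A small check of both the rule and its caveat, using the sample data from the earlier example; statistics.median averages the two middle values of an even-sized sample, which is exactly the kind of arbitrary choice mentioned above:

```python
import statistics

data = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20]

m = statistics.median(data)                                # 9.0
print(statistics.median([2 * x for x in data]))            # 18.0 == 2 * m

# The caveat: squaring is also increasing on this (positive) data, but the
# averaging convention breaks exact preservation for an even-sized sample.
print(statistics.median([x ** 2 for x in data]), m ** 2)   # 82.0 vs 81.0
```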