A couple of weeks ago I came across a Twitter post saying that long division is useless in practice, and that got me thinking: is it really? Because computers do it, and the way I (and probably you) intuitively divide numbers is basically a variation on this method.
As a refresher, here is how long division works for positive integers: given the dividend A, the divisor D, and a radix R (which can be 10, 2, or something else), we repeatedly perform the following operations:
0. Determine the largest power of R, call it R^x, such that R^x * D is at most A. (In computer implementations, x usually starts at a predetermined upper bound.) Then, iterating until x drops below zero, do the following:
1. Determine a digit of the quotient as the largest coefficient C such that C * R^x * D is at most A. If R^x * D is larger than A, the digit is zero. This digit C is appended to the quotient (i.e. initialize Q = 0, and in each step set Q := Q * R + C).
2. Subtract the power product C * R^x * D from A, and let this new value become A.
3. If x > 0, decrease x by one (so R^x is scaled down by a factor of R) and go back to step 1. Otherwise go to step 4.
4. The final value of A after all is said and done is the remainder. The quotient is the number built from the digits you appended in step 1.
That sounded a bit complicated, but it should cover all the bases (no pun intended). Now, implementing this in a program is rather trivial. You just do a FOR loop on x, another one inside (or a binary search) to determine C, and a few O(1) computations to perform the bring-down and remainder steps. (Edit: Note that there are many variants of this algorithm, but they should largely be equivalent. What is described here is a way of making the sequence of steps as straightforward as possible programmatically.)
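To make that concrete, here is a rough sketch of the general radix-R version in C. The function name, signature, and the simple inner loop for determining C are my own choices, and overflow at the extremes is ignored for clarity:

#include <stdint.h>

/* Sketch of radix-R long division following steps 0-4 above.
   Assumes dividend >= 0, divisor > 0, and radix >= 2. */
static int64_t long_divide(int64_t dividend, int64_t divisor,
                           int64_t radix, int64_t *remainder) {
    /* Step 0: find the largest power R^x with R^x * D <= A. */
    int64_t power = 1;
    while (power <= dividend / (radix * divisor))
        power *= radix;

    int64_t quotient = 0;
    while (power > 0) {
        /* Step 1: the largest digit C with C * R^x * D <= A. */
        int64_t digit = 0;
        while ((digit + 1) * power * divisor <= dividend)
            digit++;
        quotient = quotient * radix + digit;   /* append the digit */
        dividend -= digit * power * divisor;   /* step 2: subtract the power product */
        power /= radix;                        /* step 3: move to the next power */
    }
    *remainder = dividend;                     /* step 4: what is left is the remainder */
    return quotient;
}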
And if you're working in base two, the FOR loop to determine the digit C is simply an if-else statement, since R^x * D is the only value to check. This also removes the need for any multiplications, as those are replaced with bit shifts.
This gives us the following idea (sourced from my solution to LeetCode 29 using binary long division):
// Assumes dividend >= 0 and divisor > 0 (sign handling for LeetCode 29 omitted).
int64_t quotient = 0;
for (int64_t x = 31; x >= 0; x--) {
    int64_t tester = (int64_t)divisor << x;  // divisor * 2^x
    quotient <<= 1;                          // shift in a 0 digit
    if (tester > dividend) continue;         // power product doesn't fit; digit stays 0
    dividend -= tester;                      // subtract the power product
    quotient++;                              // digit is 1
}
// dividend now holds the remainder.
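For completeness, here is how that snippet might sit in a small standalone program; the wrapper function, its name, and the test values are mine and not part of the LeetCode solution, and I am again assuming a non-negative dividend and a positive divisor:

#include <stdint.h>
#include <stdio.h>

static int64_t binary_divide(int64_t dividend, int64_t divisor, int64_t *remainder) {
    int64_t quotient = 0;
    for (int64_t x = 31; x >= 0; x--) {
        int64_t tester = divisor << x;   /* divisor * 2^x */
        quotient <<= 1;
        if (tester > dividend) continue;
        dividend -= tester;
        quotient++;
    }
    *remainder = dividend;               /* whatever is left after the subtractions */
    return quotient;
}

int main(void) {
    int64_t rem = 0;
    int64_t q = binary_divide(670, 4, &rem);
    printf("670 / 4 = %lld remainder %lld\n", (long long)q, (long long)rem);  /* 167 remainder 2 */
    return 0;
}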
This works for 32-bit integers but generalizes easily. But now, about the intuitive stuff: how do people divide numbers in their heads? There are many approaches, such as grouping, chunking/partitioning, decomposition, and repeated subtraction, but most of them are really just variations on long division.
Take decomposition, for instance. Suppose you wanted to divide 156 by 4. You know that 120 is 30 * 4, so you subtract it out to get 36, which is 9 * 4. The quotient is thus 39. But this is just long division in disguise: the 120 step corresponds to looking at multiples of R^x * D, in this case 10 * 4 = 40. Then you subtract and move to the next power, which is 1; 1 * 4 goes into 36 nine times. The final subtraction leaves zero remainder, thus 156 = 4 * 39.
Another way to think about decomposition is how I visualize things. Suppose you wanted to divide 670 by 4. You know that 400 is at most 670 but 800 is not, so the term representing 10^2 is 400, with coefficient 1. Subtracting, you get 270. Intuitively, 240 <= 270 < 280, so the term representing 10^1 is 240, giving the coefficient 6. Lastly, you are left with 30. Knowing that 28 <= 30 < 32, you find the final coefficient to be 7, and the final subtraction gives a remainder of 2. This way you obtain 670 = 4 * 167 + 2.
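If you want to see that correspondence spelled out, here is a small trace of base-10 long division on 670 / 4. The starting power of 100 is hard-coded because we already know 4 * 1000 > 670, and the output format is just my own way of echoing the steps above:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t dividend = 670, divisor = 4, quotient = 0;

    /* Walk the powers of ten from 100 down to 1, exactly as in the
       mental decomposition above. */
    for (int64_t power = 100; power >= 1; power /= 10) {
        int64_t digit = 0;
        while ((digit + 1) * power * divisor <= dividend)
            digit++;                      /* largest C with C * 10^x * 4 <= A */
        dividend -= digit * power * divisor;
        quotient = quotient * 10 + digit;
        printf("coefficient %lld at power %lld, %lld left over\n",
               (long long)digit, (long long)power, (long long)dividend);
    }
    printf("670 = 4 * %lld + %lld\n", (long long)quotient, (long long)dividend);
    return 0;
}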
Of course, there are many ways to visualize this, but these are the ones I find most straightforward, and they, along with the simplest computer division schemes, all stem from the hideous abomination you know as long division.