Before continuing, it is worth considering some issues regarding the notion of digit to be used. Typical fixed-radix bases used in everyday life and computer science include base 10 (aka decimal), base 2 (aka binary), base 16 (aka hexadecimal) and base 1000 (we use a sort of mixed-radix combination of base 10 and base 1000 notation when naming numbers in English: the names for the numbers between 0 and 999 are somewhat idiosyncratic, but these are re-used with various suffixes for the different places, separated by commas at every third place, i.e. every factor of one thousand).
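To make the positional reading concrete, here is a small sketch (in Haskell, and not drawn from the text itself; the names toDigits and go are illustrative only) that splits a non-negative integer into its digits for an arbitrary radix:

```haskell
-- Split a non-negative Integer into its digits in the given radix,
-- least-significant digit first. A minimal sketch for illustration.
toDigits :: Integer -> Integer -> [Integer]
toDigits _     0 = [0]
toDigits radix n = go n
  where
    go 0 = []
    go m = m `mod` radix : go (m `div` radix)

-- For example, toDigits 1000 1234567 == [567, 234, 1], mirroring the
-- comma-separated groups in "one million, two hundred thirty-four
-- thousand, five hundred sixty-seven".
```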
In some sense, characters or symbols drawn from some larger alphabet are perfectly good digits (so we might use strings or lists of ASCII characters to represent numerals when programming), but in another sense it might be better to use an enumerated type which includes symbols only for the relevant digits. Of course, we can't restrict user inputs to such digits in any way that matters (syntax errors in user inputs must be accommodated and handled in some fashion), but we can use the type system and an appropriate definition of digits to ensure that values passed around within the program are constrained to be meaningful in context, as in the sketch below. In what follows we will treat the digit type abstractly and ignore issues of improper digits most of the time.
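As one possible rendering of this idea (a minimal sketch, assuming nothing about the text's actual definitions; the names Digit, charToDigit and digitToChar are hypothetical), we could enumerate the decimal digits explicitly and confine failure to the parsing boundary:

```haskell
-- An enumerated type for decimal digits: internal values are digits by
-- construction, so only the parser has to worry about improper input.
data Digit = D0 | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9
  deriving (Eq, Ord, Enum, Bounded, Show)

-- Parsing a character may fail; Nothing marks a syntax error in the input.
charToDigit :: Char -> Maybe Digit
charToDigit c
  | c >= '0' && c <= '9' = Just (toEnum (fromEnum c - fromEnum '0'))
  | otherwise            = Nothing

-- Printing a digit back to a character cannot fail.
digitToChar :: Digit -> Char
digitToChar d = toEnum (fromEnum d + fromEnum '0')
```

Numerals are then lists of Digit rather than raw strings, and the Maybe returned by charToDigit is the single point at which malformed user input has to be handled.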