precision statement

The precision statement defines the number of fractional decimal places that a numerical value can hold.


 precision num.constant


num.constant Number of decimal places (from 0 to 9) that a numeric value can hold.


The default precision is 4. The precision can be set in the range 0 to 9. In D3, numbers with more decimal places than the current precision are truncated.
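For instance, under the default precision of 4, a constant carrying more fractional digits loses the extra places (a sketch of the truncation behavior described above):

 precision 4
 print 3.14159265 ;* prints 3.1415 - extra places are truncated, not rounded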

Only one precision statement is allowed in a program, and it must precede the use of any numeric data. Programs that call subroutines or enter other programs must use the same precision.

If the precisions do not match, the program aborts into the debugger. This restriction also applies to main programs that share data in named common space; in that case, however, violations are not reported, and the values of the named common variables are simply incorrect.
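As a sketch, a main program and an external subroutine must each carry a matching precision statement (the program and subroutine names here are hypothetical):

 * Program MAIN
 precision 2
 call ADD.TAX
 end

 * Subroutine ADD.TAX - must declare the same precision as its caller
 subroutine ADD.TAX
 precision 2
 return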

  • For certain arithmetic functions, higher precision limits the magnitude of the values returned.
  • In D3, most functions have no limitation on their numeric range.
  • Precision in the range of 1 to 9 handles numbers as 48-bit scaled binary numbers.

If the result of a division (/), multiplication (*), remainder (\), or exponentiation (^) exceeds the maximum magnitude of the 48-bit representation, the system creates an internal variable type with a precision of 18 digits to the right of the decimal point and an unlimited number of digits to the left. This feature ensures maximum accuracy.
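A multiplication whose product exceeds the 48-bit scaled range illustrates this promotion; the operand values below are hypothetical:

 precision 6
 * The product exceeds the 48-bit scaled representation, so D3
 * promotes the result to the extended 18-digit internal type.
 print 123456789 * 987654321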

A precision of 0 forces 48-bit integer arithmetic for all operations (for example, 99999/100000 results in 0).
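Under precision 0 every fractional result is discarded, as this sketch shows (the second line assumes truncation toward zero, consistent with the truncation rule above):

 precision 0
 print 99999/100000 ;* prints 0 - the fractional part is discarded
 print 7/2          ;* integer arithmetic: prints 3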


This statement changes the program's arithmetic precision to 2. All subroutines called from this program must contain the same precision statement.

 precision 2


 precision 6 
 print 9999999/10000000

Because the precision is 6, the quotient 0.9999999 is truncated to six decimal places, and the statement prints 0.999999.