TY - JOUR
T1 - Function evaluation by incremental computation, with applications
AU - Ur, Hanoch
AU - Shen-Orr, Chaim D.
PY - 1979/8
Y1 - 1979/8
AB - Simulation of physical systems often requires repetitive evaluation of functions such as sine, cosine, exponential, etc. The arguments of these functions are physical quantities, which usually change very little from one computation cycle to the next. An approach to function evaluation is proposed which utilizes this "slowness" property to reduce computation time. This approach, the "incremental process", is, in a sense, a numerical solution of a differential equation whose solution is the desired function. The main drawback of incremental methods lies in the possibility of error propagation and accumulation. This phenomenon is very noticeable when the argument oscillates around a fixed value, since the errors grow while the true solution is nearly constant ("rectification" error). It was proposed that "reversible" incremental processes may exist which, in certain situations, limit error propagation by regaining their (exact) initial value whenever their argument returns to its initial value. We show that such reversible processes cannot exist for transcendental functions if their argument increments may assume any value within a permissible range. Placing certain reasonable restrictions on the increment values does, however, lead to algorithms which save computer time in comparison with conventional function evaluation algorithms. Several examples are presented.
UR - http://www.scopus.com/inward/record.url?scp=0018506763&partnerID=8YFLogxK
U2 - 10.1016/0378-4754(79)90129-0
DO - 10.1016/0378-4754(79)90129-0
M3 - Article
AN - SCOPUS:0018506763
SN - 0378-4754
VL - 21
SP - 163
EP - 169
JO - Mathematics and Computers in Simulation
JF - Mathematics and Computers in Simulation
IS - 2
ER -
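
As an illustration of the ideas in the abstract: a minimal Python sketch, assuming a first-order (Euler-like) incremental update for sine and cosine (an assumption for illustration, not necessarily the authors' algorithm), showing how cheap per-cycle updates accumulate a "rectification" drift when the argument oscillates around a fixed value and returns exactly to it.

import math

def incremental_sin_cos(increments, theta0=0.0):
    # Incremental process: update (sin, cos) from the previous cycle using
    # d(sin)/dtheta = cos and d(cos)/dtheta = -sin, one multiply-add per value,
    # instead of calling math.sin / math.cos on every cycle.
    s, c = math.sin(theta0), math.cos(theta0)
    for d in increments:
        s, c = s + c * d, c - s * d
    return s, c

# The argument oscillates around theta0 = 0.3 and returns exactly to it,
# yet the incrementally computed values drift ("rectification" error):
# this simple process is not reversible.
steps = [0.01, -0.01] * 5000
s, c = incremental_sin_cos(steps, theta0=0.3)
print("sin drift:", abs(s - math.sin(0.3)))
print("cos drift:", abs(c - math.cos(0.3)))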