
\documentstyle[11pt,reduce]{article}
\date{}
\title{NUMERIC}
\author{Herbert Melenk \\
Konrad--Zuse--Zentrum f\"ur Informationstechnik Berlin \\
Heilbronner Strasse 10 \\
D--10711 Berlin -- Wilmersdorf \\
Federal Republic of Germany \\[0.05in]
E--mail: melenk@sc.zib--berlin.de}
\begin{document}
\maketitle
\index{NUMERIC package}
The {\small NUMERIC} package implements some numerical (approximate)
algorithms for \REDUCE, based on the \REDUCE\ rounded mode
arithmetic. These algorithms are implemented for standard cases only.
They should not be applied to ill-conditioned problems;
please use standard mathematical libraries for those.
\section{Syntax}
\subsection{Intervals, Starting Points}
Intervals are generally coded as a lower bound and an
upper bound connected by the operator \verb+`..'+, usually
associated with a variable in an
equation, e.g.
\begin{verbatim}
x= (2.5 .. 3.5)
\end{verbatim}
means that the variable x is taken in the range from 2.5 up to
3.5. Note that the bounds can be algebraic
expressions, which, however, must evaluate to numeric results.
In cases where an interval is returned as the result, the lower
and upper bounds can be extracted with the \verb+PART+ operator
as the first and second part respectively, as sketched below.
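A minimal sketch of this extraction (the names \verb+r+, \verb+lo+ and
\verb+hi+ are placeholders, not part of the package):
\begin{verbatim}
r := (2.5 .. 3.5);
lo := part(r,1);   % first part: the lower bound
hi := part(r,2);   % second part: the upper bound
\end{verbatim}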
A starting point is specified by an equation with a numeric
right-hand side, e.g.
\begin{verbatim}
x=3.0
\end{verbatim}
If for multivariate applications several coordinates must be
specified by intervals or as starting points, these
specifications can either be collected in one parameter (which is then
a list) or be given as separate parameters. The list form is more
appropriate when the
parameters are built by other REDUCE computations in an
automated fashion, while the flat form is more convenient
for direct interactive input; both forms are illustrated below.
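For illustration, a hypothetical call of \verb+NUM_MIN+ (described in
the {\it Minima} section below; output omitted), written once in the
flat form and once in the equivalent list form:
\begin{verbatim}
% flat form: coordinates as separate parameters
num_min(x**2 + y**2, x=1, y=1);
% list form: the same coordinates collected in one list
num_min(x**2 + y**2, {x=1, y=1});
\end{verbatim}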
\subsection{Accuracy Control}
The keyword parameters $accuracy=a$ and $iterations=i$, where
$a$ and $i$ must be positive integers, control the
iterative algorithms: the iteration is continued until
the local error is below $10^{-a}$; if that cannot be achieved
within $i$ steps, the iteration is terminated with an
error message. The values reached so far are then returned
as the result.
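As a sketch (again using \verb+NUM_MIN+; a hypothetical call with the
output omitted), the keyword parameters are simply appended to the
argument list:
\begin{verbatim}
% require a local error below 10**(-12) within at most 50 steps
num_min(sin(x)+x/5, x=0, accuracy=12, iterations=50);
\end{verbatim}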
\subsection{Tracing}
Normally the algorithms produce only a minimum of printed
output during their operation. In case of an unsuccessful
or unexpectedly long computation a trace of the iteration can be
printed by setting
\begin{verbatim}
on trnumeric;
\end{verbatim}
\section{Minima}
The Fletcher--Reeves version of the $steepest\ descent$
algorithm is used to find the minimum of a
function of one or more variables. The
function must have continuous partial derivatives with respect to all
variables. The starting point of the search can be
specified; if not, random values are taken instead.
Steepest descent algorithms in general find only local
minima.
Syntax:
\begin{description}
\item[NUM\_MIN] $(exp, var_1[=val_1] [,var_2[=val_2] \ldots]$
$ [,accuracy=a][,iterations=i]) $
or
\item[NUM\_MIN] $(exp, \{ var_1[=val_1] [,var_2[=val_2] \ldots] \}$
$ [,accuracy=a][,iterations=i]) $
where $exp$ is a function expression,
$var_1, var_2, \ldots$ are the variables in $exp$ and
$val_1, val_2, \ldots$ are the (optional) start values.
NUM\_MIN tries to find the next local minimum along the descending
path starting at the given point. The result is a list
with the minimum function value as first element, followed by a list
of equations in which the variables are equated to the coordinates
of the result point.
\end{description}
Examples:
\begin{verbatim}
num_min(sin(x)+x/5, x);
{4.9489585606,{X=29.643767785}}
num_min(sin(x)+x/5, x=0);
{ - 1.3342267466,{X= - 1.7721582671}}
% Rosenbrock function (well known as hard to minimize).
fktn := 100*(x1**2-x2)**2 + (1-x1)**2;
num_min(fktn, x1=-1.2, x2=1, iterations=200);
{0.00000021870228295,{X1=0.99953284494,X2=0.99906807238}}
\end{verbatim}
\section{Roots of Functions / Solutions of Equations}
An adaptively damped Newton iteration is used to find
an approximate zero of a function, a function vector, or the solution
of an equation or an equation system. Equations are
internally converted to the difference of left-hand and right-hand sides
so that the Newton method (zero finding) can be applied. The expressions
must have continuous derivatives with respect to all variables.
A starting point for the iteration can be given. If not given,
random values are taken instead. If the number of
forms is not equal to the number of variables, the
Newton method cannot be applied; the minimum
of the sum of absolute squares is located instead.
With ON COMPLEX, solutions with imaginary parts can be
found if either the expression(s) or the starting point
contain a nonzero imaginary part.
Syntax:
\begin{description}
\item[NUM\_SOLVE] $(exp_1, var_1[=val_1][,accuracy=a][,iterations=i])$
or
\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
var_1[=val_1],\ldots,var_n[=val_n]$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
or
\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
\{var_1[=val_1],\ldots,var_n[=val_n]\}$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
where $exp_1, \ldots, exp_n$ are function expressions,
$var_1, \ldots, var_n$ are the variables,
$val_1, \ldots, val_n$ are optional start values.
NUM\_SOLVE tries to find a zero/solution of the expression(s).
The result is a list of equations in which the variables are
equated to the coordinates of the result point.
The Jacobian matrix is stored as a side effect in the shared
variable JACOBIAN.
\end{description}
Example:
\begin{verbatim}
num_solve({sin x=cos y, x + y = 1},{x=1,y=2});
{X= - 1.8561957251,Y=2.856195584}
jacobian;
[COS(X)  SIN(Y)]
[              ]
[  1       1   ]
\end{verbatim}
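The complex mode mentioned above can be sketched as follows (a
hypothetical call with the output omitted; the starting point carries a
nonzero imaginary part so that the iteration may leave the real axis):
\begin{verbatim}
on complex;
% look for a root of x**2 + 1 near the imaginary unit
num_solve(x**2 + 1, x=0.5+0.5*i);
off complex;
\end{verbatim}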
\section{Integrals}
For the numerical evaluation of univariate integrals over a finite
interval the following strategy is used:
\begin{enumerate}
\item If the function has an antiderivative in closed form
which is bounded in the integration interval, this
is used.
\item Otherwise a Chebyshev approximation is computed,
starting with order 20 and increasing the order up to 80 if necessary.
If that approximation is recognized as sufficiently convergent,
it is used for computing the integral by directly
integrating the coefficient sequence.
\item If neither of these methods succeeds, an
adaptive multilevel quadrature algorithm is used.
\end{enumerate}
For multivariate integrals only the adaptive quadrature is used.
This algorithm tolerates isolated singularities.
The value $iterations$ here limits the number of
levels of local interval subdivision.
$Accuracy$ is a measure of the relative total discretization
error (comparison of order 1 and order 2 approximations).
Syntax:
\begin{description}
\item[NUM\_INT] $(exp,var_1=(l_1 .. u_1)[,var_2=(l_2 .. u_2)\ldots]$
\item[\ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
where $exp$ is the function to be integrated,
$var_1, var_2, \ldots$ are the integration variables,
$l_1, l_2, \ldots$ are the lower bounds, and
$u_1, u_2, \ldots$ are the upper bounds.
The result is the value of the integral.
\end{description}
Example:
\begin{verbatim}
num_int(sin x,x=(0 .. pi));
2.0000010334
\end{verbatim}
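For the multivariate case described above, the integration variables
and their ranges are simply listed one after another; a minimal sketch
(a hypothetical call, output omitted):
\begin{verbatim}
% double integral of x*y over the unit square,
% evaluated by the adaptive multilevel quadrature
num_int(x*y, x=(0 .. 1), y=(0 .. 1));
\end{verbatim}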
\section{Ordinary Differential Equations}
A Runge--Kutta method of order 3 finds an approximate graph of
the solution of a real initial value problem for an ordinary
differential equation.
Syntax:
\begin{description}
\item[NUM\_ODESOLVE]($exp$,$depvar=dv$,$indepvar$=$(from .. to)$
$ [,accuracy=a][,iterations=i]) $
where
$exp$ is an equation (or an expression implicitly set to zero) which
contains the first derivative of $depvar$ with respect to $indepvar$,
$depvar$ is an identifier representing the dependent variable
(the function to be found),
$indepvar$ is an identifier representing the independent variable,
$from$ is the starting point of the integration,
$to$ is the endpoint of the integration (allowed to be below $from$),
$dv$ is the initial value of $depvar$ at the point $indepvar=from$.
The ODE $exp$ is converted into an explicit form, which is then
used for a Runge--Kutta iteration over the given range. The
number of steps is controlled by the value of $i$
(default: 20).
If the steps are too coarse to reach the desired
accuracy in the neighborhood of the starting point, the number is
increased automatically.
The result is a list of pairs, each representing a point of the
approximate solution of the ODE problem.
\end{description}
Example:
\begin{verbatim}
num_odesolve(df(y,x)=y,y=1,x=(0 .. 1), iterations=5);
{{0.0,1.0},{0.2,1.2214},{0.4,1.49181796},{0.6,1.8221064563},
 {0.8,2.2255208258},{1.0,2.7182511366}}
\end{verbatim}
Remarks:
\begin{enumerate}
\item[--] If in $exp$ the differential is not isolated on the left-hand
side, please ensure that the dependent variable is explicitly declared
using a \verb+DEPEND+ statement, e.g.
\begin{verbatim}
depend y,x;
\end{verbatim}
otherwise the formal derivative will evaluate to zero in REDUCE
(see the sketch after this list).
\item[--] The REDUCE package SOLVE is used to convert the form into
an explicit ODE. If that process fails or has no unique result,
the evaluation is stopped with an error message.
\end{enumerate}
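A minimal sketch of such an implicit form (a hypothetical call with the
output omitted; the \verb+DEPEND+ declaration keeps \verb+df(y,x)+ from
collapsing to zero on input):
\begin{verbatim}
depend y,x;
% the derivative is not isolated on the left-hand side;
% SOLVE converts the equation to an explicit ODE internally
num_odesolve(x*df(y,x) - y = 0, y=1, x=(1 .. 2), iterations=5);
\end{verbatim}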
\section{Bounds of a Function}
Upper and lower bounds of a real valued function over an
interval or a rectangular multivariate domain are computed
by the operator BOUNDS. The algorithmic basis is computation
with inequalities: starting from the interval(s) of the
variables, the bounds are propagated through the expression
using the rules for computing with inequalities. Some knowledge
about the behavior of special functions like ABS, SIN, COS, EXP, LOG,
fractional exponentials etc.\ is built in and is used
if the operator BOUNDS is called with rounded mode on
(otherwise only algebraic evaluation rules are available).
If BOUNDS finds a singularity within an interval, the evaluation
is stopped with an error message indicating the problematic part
of the expression.
Syntax:
\begin{description}
\item[BOUNDS]$(exp,var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2) \ldots])$
or
\item[BOUNDS]$(exp,\{var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2)\ldots]\})$
where $exp$ is the function to be investigated,
$var_1, var_2, \ldots$ are the variables of $exp$, and
$l_1, l_2, \ldots$ and $u_1, u_2, \ldots$ specify the domain (intervals).
BOUNDS computes upper and lower bounds for the expression over the
given domain. An interval is returned.
\end{description}
Example:
\begin{verbatim}
bounds(sin x,x=(1 .. 2));
{-1,1}
on rounded;
bounds(sin x,x=(1 .. 2));
0.84147098481 .. 1
bounds(x**2+x,x=(-0.5 .. 0.5));
 - 0.25 .. 0.75
\end{verbatim}
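A rectangular two-variable domain is specified in the same way, either
in flat form or as a list; a minimal sketch (a hypothetical call,
output omitted):
\begin{verbatim}
on rounded;
% bounds of x*y + sin x over the rectangle [0,1] x [2,3]
bounds(x*y + sin x, {x=(0 .. 1), y=(2 .. 3)});
\end{verbatim}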
\section{Chebyshev Curve Fitting}
The operator family $Chebyshev\_\ldots$ implements approximation
and evaluation of functions by the Chebyshev method.
Let $T_n^{(a,b)}(x)$ be the Chebyshev polynomial of order $n$
transformed to the interval $(a,b)$. Then a function $f(x)$ can be
approximated in $(a,b)$ by a series
$f(x) \approx \sum_{i=0}^N c_i T_i^{(a,b)}(x)$.
The operator $Chebyshev\_fit$ computes this approximation and
returns a list which has as first element the sum expressed
as a polynomial and as second element the sequence
of Chebyshev coefficients $\{c_i\}$.
$Chebyshev\_df$ and $Chebyshev\_int$ transform a Chebyshev
coefficient list into the coefficients of the corresponding
derivative or integral respectively. For evaluating a Chebyshev
approximation at a given point in the basic interval the
operator $Chebyshev\_eval$ can be used. Note that
$Chebyshev\_eval$ is based on a recurrence relation which is
in general more stable than a direct evaluation of the
complete polynomial.
Syntax:
\begin{description}
\item[CHEBYSHEV\_FIT] $(fcn,var=(lo .. hi),n)$
\item[CHEBYSHEV\_EVAL] $(coeffs,var=(lo .. hi),var=pt)$
\item[CHEBYSHEV\_DF] $(coeffs,var=(lo .. hi))$
\item[CHEBYSHEV\_INT] $(coeffs,var=(lo .. hi))$
where $fcn$ is an algebraic expression (the function to be
fitted), $var$ is the variable of $fcn$, $lo$ and $hi$ are
numerical real values which describe an interval ($lo < hi$),
$n$ is the approximation order, an integer $>0$, set to 20 if missing,
$pt$ is a numerical value in the interval and $coeffs$ is
a series of Chebyshev coefficients, computed by
$CHEBYSHEV\_FIT$, $CHEBYSHEV\_DF$ or $CHEBYSHEV\_INT$.
\end{description}
Example:
\begin{verbatim}
on rounded;
w:=chebyshev_fit(sin x/x,x=(1 .. 3),5);
w := {0.03824*x**3 - 0.2398*x**2 + 0.06514*x + 0.9778,
      {0.8991,-0.4066,-0.005198,0.009464,-0.00009511}}
chebyshev_eval(second w, x=(1 .. 3), x=2.1);
0.4111
\end{verbatim}
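The derivative and integral operators work on the coefficient list from
the example above; a minimal sketch (hypothetical calls, the name
\verb+dcoeffs+ is a placeholder, output omitted):
\begin{verbatim}
% coefficients of the derivative of the fitted approximation
dcoeffs := chebyshev_df(second w, x=(1 .. 3));
% evaluate the derivative approximation at an interior point
chebyshev_eval(dcoeffs, x=(1 .. 3), x=2.1);
\end{verbatim}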
\section{General Curve Fitting}
The operator $NUM\_FIT$ finds for a set of
points the linear combination of a given set of
functions (function basis) which approximates the
points best with respect to the least squares
criterion (minimum of the sum of the squared deviations).
The solution is found as the zero of the
gradient vector of the sum of squared errors.
Syntax:
\begin{description}
\item[NUM\_FIT] $(vals,basis,var=pts)$
where $vals$ is a list of numeric values,
$var$ is a variable used for the approximation,
$pts$ is a list of coordinate values which correspond to $var$,
and $basis$ is a set of functions varying in $var$ which is used
for the approximation.
\end{description}
The result is a list containing as first element the
function which approximates the given values, and as
second element a list of coefficients which were used
to build this function from the basis.
Example:
\begin{verbatim}
% approximate a set of factorials by a polynomial
pts:=for i:=1 step 1 until 5 collect i$
vals:=for i:=1 step 1 until 5 collect
    for j:=1:i product j$
num_fit(vals,{1,x,x**2},x=pts);
{14.571428571*X**2 - 61.428571429*X + 54.6,
 {54.6, - 61.428571429,14.571428571}}
num_fit(vals,{1,x,x**2,x**3,x**4},x=pts);
{2.2083333234*X**4 - 20.249999879*X**3
  + 67.791666154*X**2 - 93.749999133*X
  + 44.999999525,
 {44.999999525, - 93.749999133,67.791666154,
  - 20.249999879,2.2083333234}}
\end{verbatim}
\section{Function Bases}
The following procedures compute sets of functions,
e.g.\ to be used for approximation.
All procedures have
two parameters, the expression to be used as $variable$
(an identifier in most cases) and the
order of the desired system.
The functions are not scaled to a specific interval, but
the $variable$ can be accompanied by a scale factor
and/or a translation
in order to map the generic interval of orthogonality to another one
(e.g. $(x - 1/2) * 2\pi$); a sketch of such a call follows the table
below.
The result is a function list in ascending order, such that
the first element is the function of order zero and (for
the polynomial systems) the function of order $n$ is the $(n+1)$-th
element.
\begin{verbatim}
monomial_base(x,n)        {1,x,...,x**n}
trigonometric_base(x,n)   {1,sin x,cos x,sin(2x),cos(2x)...}
Bernstein_base(x,n)       Bernstein polynomials
Legendre_base(x,n)        Legendre polynomials
Laguerre_base(x,n)        Laguerre polynomials
Hermite_base(x,n)         Hermite polynomials
Chebyshev_base_T(x,n)     Chebyshev polynomials first kind
Chebyshev_base_U(x,n)     Chebyshev polynomials second kind
\end{verbatim}
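As a sketch of the scaling remark above (a hypothetical call, output
omitted), the argument may itself be a shifted and scaled expression:
\begin{verbatim}
% trigonometric system in t, with (0 .. 1) mapped to (-pi .. pi)
trigonometric_base((t - 1/2)*2*pi, 3);
\end{verbatim}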
Example:
\begin{verbatim}
Bernstein_base(x,5);
{ - X**5 + 5*X**4 - 10*X**3 + 10*X**2 - 5*X + 1,
 5*X*(X**4 - 4*X**3 + 6*X**2 - 4*X + 1),
 10*X**2*( - X**3 + 3*X**2 - 3*X + 1),
 10*X**3*(X**2 - 2*X + 1),
 5*X**4*( - X + 1),
 X**5}
\end{verbatim}
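Such a basis can be handed directly to \verb+NUM_FIT+ from the previous
section; a minimal sketch (a hypothetical call reusing \verb+vals+ and
\verb+pts+ from the NUM\_FIT example, output omitted):
\begin{verbatim}
% fit the factorial data with a cubic Bernstein basis
num_fit(vals, bernstein_base(x,3), x=pts);
\end{verbatim}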
\end{document}