\chapter{NUMERIC: Solving numerical problems}
\label{NUMERIC}
\typeout{{NUMERIC: Solving numerical problems}}
{\footnotesize
\begin{center}
Herbert Melenk \\
Konrad--Zuse--Zentrum f\"ur Informationstechnik Berlin \\
Takustra\ss e 7 \\
D--14195 Berlin--Dahlem, Germany \\[0.05in]
e--mail: melenk@zib.de
\end{center}
}
\ttindex{NUMERIC}
\ttindex{NUM\_SOLVE}\index{Newton's method}\ttindex{NUM\_ODESOLVE}
\ttindex{BOUNDS}\index{Chebyshev fit}
\ttindex{NUM\_MIN}\index{Minimum}\ttindex{NUM\_INT}\index{Quadrature}
The {\small NUMERIC} package implements some numerical (approximate)
algorithms for \REDUCE{}, based on the \REDUCE\ rounded mode
arithmetic. These algorithms are implemented for standard cases. They
should not be called for ill-conditioned problems; please use standard
mathematical libraries for these.
\section{Syntax}
\subsection{Intervals, Starting Points}
Intervals are generally coded as lower bound and
upper bound connected by the operator \verb+`..'+, usually
associated to a variable in an
equation.\index{Interval}
\begin{verbatim}
x = (2.5 .. 3.5)
\end{verbatim}
means that the variable {\tt x} is taken in the range from 2.5 up to
3.5. Note that the bounds can be algebraic
expressions, which, however, must evaluate to numeric results.
In cases where an interval is returned as the result, the lower
and upper bounds can be extracted by the \verb+PART+ operator
as the first and second part respectively.
A starting point is specified by an equation with a numeric
right-hand side, e.g.
\begin{verbatim}
x = 3.0
\end{verbatim}
If for multivariate applications several coordinates must be
specified by intervals or as a starting point, these
specifications can either be collected in one parameter (which is
then a list) or they can be given as separate parameters.
The list form is more appropriate when the
parameters are built from other \REDUCE\ calculations in an
automatic style, while the flat form is more convenient
for direct interactive input.
\subsection{Accuracy Control}
The keyword parameters $accuracy=a$ and $iterations=i$, where
$a$ and $i$ must be positive integers, control the
iterative algorithms: the iteration is continued until
the local error is below $10^{-a}$; if that is not achieved
within $i$ steps, the iteration is terminated with an
error message. The values reached so far are then returned
as the result.
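As an illustration of this convention, the following Python sketch (not \REDUCE\ code; the function names are invented) shows the control loop that all the iterative operators share, applied here to Heron's rule for $\sqrt{2}$:

```python
def iterate(step, x0, accuracy=6, iterations=30):
    """Iterate x -> step(x) until the local error (the change between
    successive iterates) drops below 10**-accuracy; give up with an
    error after 'iterations' steps, as the accuracy/iterations
    keyword parameters prescribe."""
    x = x0
    for _ in range(iterations):
        x_new = step(x)
        if abs(x_new - x) < 10.0 ** (-accuracy):
            return x_new
        x = x_new
    raise RuntimeError("accuracy not reached; last value %r" % x)

# Heron's rule x -> (x + 2/x)/2 converges to sqrt(2)
root = iterate(lambda x: 0.5 * (x + 2.0 / x), 1.0, accuracy=10)
```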
\section{Minima}
The function to be minimised must have continuous partial derivatives
with respect to all variables. The starting point of the search can
be specified; if not, random values are taken instead. Steepest
descent algorithms in general find only local minima.
Syntax:\ttindex{NUM\_MIN}
\begin{description}
\item[NUM\_MIN] $(exp, var_1[=val_1] [,var_2[=val_2] \ldots]$
$ [,accuracy=a][,iterations=i]) $
or
\item[NUM\_MIN] $(exp, \{ var_1[=val_1] [,var_2[=val_2] \ldots] \}$
$ [,accuracy=a][,iterations=i]) $
where $exp$ is a function expression,
$var_1, var_2, \ldots$ are the variables in $exp$ and
$val_1, val_2, \ldots$ are the (optional) start values.
NUM\_MIN tries to find the next local minimum along the descending
path starting at the given point. The result is a list
with the minimum function value as first element, followed by a list
of equations where the variables are equated to the coordinates
of the result point.
\end{description}
Examples:
\begin{verbatim}
num_min(sin(x)+x/5, x);

{4.9489585606,{X=29.643767785}}

num_min(sin(x)+x/5, x=0);

{ - 1.3342267466,{X= - 1.7721582671}}

% Rosenbrock function (well known as hard to minimize).
fktn := 100*(x1**2-x2)**2 + (1-x1)**2;

num_min(fktn, x1=-1.2, x2=1, iterations=200);

{0.00000021870228295,{X1=0.99953284494,X2=0.99906807238}}
\end{verbatim}
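The idea of walking down the descending path can be sketched in Python (a minimal fixed-step descent, not the step-controlled algorithm NUM\_MIN actually uses). Started at $x=0$ as in the second example above, it ends at the minimum where $\cos x = -1/5$:

```python
import math

def descend_1d(f, df, x, step=0.1, accuracy=10, iterations=5000):
    """Crude steepest descent in one variable: move against the
    gradient with a fixed step until the update is below 10**-accuracy."""
    for _ in range(iterations):
        x_new = x - step * df(x)
        if abs(x_new - x) < 10.0 ** (-accuracy):
            return f(x_new), x_new
        x = x_new
    raise RuntimeError("no convergence")

# f(x) = sin x + x/5, so the descending path from 0 leads to the
# stationary point with cos x = -1/5 on the negative axis
fmin, xmin = descend_1d(lambda x: math.sin(x) + x / 5,
                        lambda x: math.cos(x) + 0.2, 0.0)
```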
\section{Roots of Functions / Solutions of Equations}
An adaptively damped Newton iteration is used to find an approximate
zero of a function, a function vector, or the solution of an equation
or an equation system. The expressions must have continuous
derivatives for all variables. A starting point for the iteration can
be given. If not given, random values are taken instead. If the number
of forms is not equal to the number of variables, the Newton method
cannot be applied; the minimum of the sum of absolute squares is
then located instead.
With {\tt ON COMPLEX}, solutions with imaginary parts can be
found if either the expression(s) or the starting point
contain a nonzero imaginary part.
Syntax:\ttindex{NUM\_SOLVE}
\begin{description}
\item[NUM\_SOLVE] $(exp_1, var_1[=val_1][,accuracy=a][,iterations=i])$
or
\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
var_1[=val_1],\ldots,var_n[=val_n]$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
or
\item[NUM\_SOLVE] $(\{exp_1,\ldots,exp_n\},
\{var_1[=val_1],\ldots,var_n[=val_n]\}$
\item[\ \ \ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
where $exp_1, \ldots, exp_n$ are function expressions,
$var_1, \ldots, var_n$ are the variables, and
$val_1, \ldots, val_n$ are optional start values.
NUM\_SOLVE tries to find a zero/solution of the expression(s).
The result is a list of equations, where the variables are
equated to the coordinates of the result point.
The Jacobian matrix is stored as a side effect in the shared
variable JACOBIAN.\ttindex{JACOBIAN}
\end{description}
Example:
\begin{verbatim}
num_solve({sin x=cos y, x + y = 1},{x=1,y=2});

{X= - 1.8561957251,Y=2.856195584}

jacobian;

[COS(X)  SIN(Y)]
[              ]
[  1       1   ]
\end{verbatim}
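A damped Newton step can be sketched as follows (a minimal Python illustration, not the package's adaptive damping strategy; note that this system has infinitely many solutions, so a simple sketch may land on a different root than the one printed above). The Jacobian of the example system is $[[\cos x, \sin y],[1,1]]$, exactly what NUM\_SOLVE stores in JACOBIAN:

```python
import math

def damped_newton2(F, J, x, y, accuracy=10, iterations=50):
    """Damped Newton iteration for a 2x2 system: take the Newton
    step, halving it until the residual norm decreases."""
    def norm(v):
        return math.hypot(v[0], v[1])
    for _ in range(iterations):
        f = F(x, y)
        if norm(f) < 10.0 ** (-accuracy):
            break
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        dx = (-d * f[0] + b * f[1]) / det   # solve J*(dx,dy) = -F
        dy = (c * f[0] - a * f[1]) / det
        lam = 1.0                            # damping factor
        while lam > 1e-8 and norm(F(x + lam * dx, y + lam * dy)) >= norm(f):
            lam *= 0.5
        x, y = x + lam * dx, y + lam * dy
    return x, y

# sin x = cos y, x + y = 1, started at (1, 2) as in the example
x, y = damped_newton2(
    lambda x, y: (math.sin(x) - math.cos(y), x + y - 1),
    lambda x, y: ((math.cos(x), math.sin(y)), (1.0, 1.0)),
    1.0, 2.0)
```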
\section{Integrals}
Numerical integration uses a polyalgorithm, explained in the full
documentation.\ttindex{NUM\_INT}
\begin{description}
\item[NUM\_INT] $(exp,var_1=(l_1 .. u_1)[,var_2=(l_2 .. u_2)\ldots]$
\item[\ \ \ \ \ \ ]$[,accuracy=a][,iterations=i])$
where $exp$ is the function to be integrated,
$var_1, var_2, \ldots$ are the integration variables,
$l_1, l_2, \ldots$ are the lower bounds, and
$u_1, u_2, \ldots$ are the upper bounds.
The result is the value of the integral.
\end{description}
Example:
\begin{verbatim}
num_int(sin x,x=(0 .. pi));

2.0000010334
\end{verbatim}
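One classical building block of such quadrature polyalgorithms is adaptive Simpson bisection; the following Python sketch (an illustration only, not the algorithm NUM\_INT actually implements) evaluates the same integral:

```python
import math

def adaptive_simpson(f, a, b, accuracy=10):
    """Adaptive Simpson quadrature: bisect each subinterval until
    the local error estimate falls below the tolerance."""
    def simpson(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = f(m)
        return m, fm, (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, fa, b, fb, whole, m, fm, eps):
        lm, flm, left = simpson(a, fa, m, fm)
        rm, frm, right = simpson(m, fm, b, fb)
        if abs(left + right - whole) < 15.0 * eps:
            # accept, with Richardson correction term
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, fa, m, fm, left, lm, flm, eps / 2.0) +
                recurse(m, fm, b, fb, right, rm, frm, eps / 2.0))

    fa, fb = f(a), f(b)
    m, fm, whole = simpson(a, fa, b, fb)
    return recurse(a, fa, b, fb, whole, m, fm, 10.0 ** (-accuracy))

val = adaptive_simpson(math.sin, 0.0, math.pi)
```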
\section{Ordinary Differential Equations}
A Runge--Kutta method of order 3 finds an approximate graph for
the solution of an ordinary differential equation
(a real initial value problem).
Syntax:\ttindex{NUM\_ODESOLVE}
\begin{description}
\item[NUM\_ODESOLVE] $(exp,depvar=dv,indepvar=(from .. to)$
$ [,accuracy=a][,iterations=i]) $
where
$exp$ is the differential equation: an equation (or an expression
implicitly set to zero) which contains the first derivative of
$depvar$ with respect to $indepvar$,
$depvar$ is an identifier representing the dependent variable
(the function to be found),
$indepvar$ is an identifier representing the independent variable,
$from$ is the starting point of integration,
$to$ is the endpoint of integration (allowed to be below $from$), and
$dv$ is the initial value of $depvar$ at the point $indepvar=from$.
The ODE $exp$ is converted into an explicit form, which is then
used for a Runge--Kutta iteration over the given range. The
number of steps is controlled by the value of $i$
(default: 20).
If the steps are too coarse to reach the desired
accuracy in the neighbourhood of the starting point, the number is
increased automatically.
The result is a list of pairs, each representing a point of the
approximate solution of the ODE problem.
\end{description}
Example:
\begin{verbatim}
num_odesolve(df(y,x)=y,y=1,x=(0 .. 1), iterations=5);

{{0.0,1.0},{0.2,1.2214},{0.4,1.49181796},{0.6,1.8221064563},
 {0.8,2.2255208258},{1.0,2.7182511366}}
\end{verbatim}
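A third-order Runge--Kutta scheme can be sketched in a few lines of Python; the classical Kutta coefficients are used here (the manual does not document which order-3 tableau \REDUCE\ uses, so this is an illustration). With the default of 20 steps, $y'=y,\ y(0)=1$ reproduces $e$ to about the accuracy shown above:

```python
import math

def rk3(f, x, y, x_end, steps=20):
    """Kutta's third-order Runge-Kutta scheme for y' = f(x, y);
    returns the list of (x, y) pairs like NUM_ODESOLVE."""
    h = (x_end - x) / steps
    points = [(x, y)]
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2.0, y + h / 2.0 * k1)
        k3 = f(x + h, y + h * (2.0 * k2 - k1))
        y += h * (k1 + 4.0 * k2 + k3) / 6.0
        x += h
        points.append((x, y))
    return points

# y' = y, y(0) = 1, integrated up to x = 1: y(1) approximates e
pts = rk3(lambda x, y: y, 0.0, 1.0, 1.0)
```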
\section{Bounds of a Function}
Upper and lower bounds of a real valued function over an
interval or a rectangular multivariate domain are computed
by the operator {\tt BOUNDS}. Some knowledge
about the behaviour of special functions like ABS, SIN, COS, EXP, LOG,
fractional exponentials etc.\ is built in and can be exploited
if the operator BOUNDS is called with rounded mode on
(otherwise only algebraic evaluation rules are available).
If BOUNDS finds a singularity within an interval, the evaluation
is stopped with an error message indicating the problematic part
of the expression.
\newpage
Syntax:\ttindex{BOUNDS}
\begin{description}
\item[BOUNDS] $(exp,var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2) \ldots])$
or
\item[BOUNDS] $(exp,\{var_1=(l_1 .. u_1) [,var_2=(l_2 .. u_2)\ldots]\})$
where $exp$ is the function to be investigated,
$var_1, var_2, \ldots$ are the variables of $exp$, and
$l_1, l_2, \ldots$ and $u_1, u_2, \ldots$ specify the area (intervals).
{\tt BOUNDS} computes upper and lower bounds for the expression in the
given area. An interval is returned.
\end{description}
Example:
\begin{verbatim}
bounds(sin x,x=(1 .. 2));

{-1,1}

on rounded;
bounds(sin x,x=(1 .. 2));

0.84147098481 .. 1

bounds(x**2+x,x=(-0.5 .. 0.5));

 - 0.25 .. 0.75
\end{verbatim}
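The last example shows why built-in knowledge matters: naive interval arithmetic treats every occurrence of a variable as independent (the so-called dependency problem) and therefore overestimates. The following Python sketch (an illustration of interval arithmetic in general, not of the rules BOUNDS actually applies) contrasts a naive product with a square operation that at least knows $x^2\ge 0$; BOUNDS does better still and finds the exact lower bound $-0.25$:

```python
def iadd(a, b):
    """Interval sum: lower bounds and upper bounds add."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval product: extremes of the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def isq(a):
    """Square of one quantity: unlike imul(a, a), this knows both
    factors are identical, so the result is never negative."""
    lo, hi = sorted((a[0] * a[0], a[1] * a[1]))
    return (0.0, hi) if a[0] <= 0.0 <= a[1] else (lo, hi)

x = (-0.5, 0.5)
naive = iadd(imul(x, x), x)   # x**2 + x with x's treated independently
sharp = iadd(isq(x), x)       # at least uses x**2 >= 0
```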
\section{Chebyshev Curve Fitting}
The operator family $Chebyshev\_\ldots$ implements approximation
and evaluation of functions by the Chebyshev method.
The operator {\tt Chebyshev\_fit}\ttindex{Chebyshev\_fit} computes
this approximation and returns a list which has as first element the
sum expressed as a polynomial and as second element the sequence of
Chebyshev coefficients ${c_i}$. {\tt
Chebyshev\_df}\ttindex{Chebyshev\_df} and {\tt
Chebyshev\_int}\ttindex{Chebyshev\_int} transform a Chebyshev
coefficient list into the coefficients of the corresponding derivative
or integral respectively. For evaluating a Chebyshev approximation at
a given point in the basic interval the operator {\tt
Chebyshev\_eval}\ttindex{Chebyshev\_eval} can be used. Note that {\tt
Chebyshev\_eval} is based on a recurrence relation which is in general
more stable than a direct evaluation of the complete polynomial.
\begin{description}
\item[CHEBYSHEV\_FIT] $(fcn,var=(lo .. hi),n)$
\item[CHEBYSHEV\_EVAL] $(coeffs,var=(lo .. hi),var=pt)$
\item[CHEBYSHEV\_DF] $(coeffs,var=(lo .. hi))$
\item[CHEBYSHEV\_INT] $(coeffs,var=(lo .. hi))$
where $fcn$ is an algebraic expression (the function to be
fitted), $var$ is the variable of $fcn$, $lo$ and $hi$ are
numerical real values which describe an interval ($lo < hi$),
$n$ is the approximation order, an integer $>0$, set to 20 if missing,
$pt$ is a numerical value in the interval, and $coeffs$ is
a series of Chebyshev coefficients, computed by one of
{\tt CHEBYSHEV\_FIT}, {\tt CHEBYSHEV\_DF} or {\tt CHEBYSHEV\_INT}.
\end{description}
Example:
\begin{verbatim}
on rounded;
w:=chebyshev_fit(sin x/x,x=(1 .. 3),5);

               3           2
w := {0.03824*x  - 0.2398*x  + 0.06514*x + 0.9778,
      {0.8991,-0.4066,-0.005198,0.009464,-0.00009511}}

chebyshev_eval(second w, x=(1 .. 3), x=2.1);

0.4111
\end{verbatim}
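The stable recurrence mentioned above is Clenshaw's algorithm. The Python sketch below evaluates the coefficient list from the example; comparing against the printed polynomial at $x=2.1$ indicates that the list follows the common convention of a halved leading coefficient, $f \approx c_0/2 + \sum_{k\ge 1} c_k T_k(u)$ (an observation from the example, not a documented statement):

```python
import math

def chebyshev_eval(c, lo, hi, pt):
    """Evaluate c[0]/2 + sum c[k]*T_k(u) by the Clenshaw recurrence,
    where u maps pt from (lo .. hi) onto the basic interval [-1, 1]."""
    u = (2.0 * pt - lo - hi) / (hi - lo)
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):          # b_k = 2u*b_{k+1} - b_{k+2} + c_k
        b1, b2 = 2.0 * u * b1 - b2 + ck, b1
    return u * b1 - b2 + 0.5 * c[0]

# the coefficient list returned by chebyshev_fit above
coeffs = [0.8991, -0.4066, -0.005198, 0.009464, -0.00009511]
val = chebyshev_eval(coeffs, 1.0, 3.0, 2.1)
```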
\section{General Curve Fitting}
The operator {\tt NUM\_FIT}\ttindex{NUM\_FIT} finds, for a set of
points, the linear combination of a given set of
functions (function basis) which approximates the
points best in the least squares sense
(minimum of the sum of the squares of the deviations).
The solution is found as the zero of the
gradient vector of the sum of squared errors.
Syntax:
\begin{description}
\item[NUM\_FIT] $(vals,basis,var=pts)$
where $vals$ is a list of numeric values,
$var$ is a variable used for the approximation,
$pts$ is a list of coordinate values which correspond to $var$, and
$basis$ is a set of functions varying in $var$ which is used
for the approximation.
\end{description}
The result is a list containing as first element the
function which approximates the given values, and as
second element a list of coefficients which were used
to build this function from the basis.

Example:
\begin{verbatim}
% approximate a set of factorials by a polynomial
pts:=for i:=1 step 1 until 5 collect i$
vals:=for i:=1 step 1 until 5 collect
          for j:=1:i product j$

num_fit(vals,{1,x,x**2},x=pts);

               2
{14.571428571*X  - 61.428571429*X + 54.6,
 {54.6, - 61.428571429,14.571428571}}

num_fit(vals,{1,x,x**2,x**3,x**4},x=pts);

               4                 3
{2.2083333234*X  - 20.249999879*X
                 2
 + 67.791666154*X  - 93.749999133*X
 + 44.999999525,
 {44.999999525, - 93.749999133,67.791666154,
  - 20.249999879,2.2083333234}}
\end{verbatim}
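Setting the gradient of the squared-error sum to zero yields the normal equations $A^{T}A\,c = A^{T}y$, where $A$ holds the basis functions evaluated at the points. The following Python sketch (an illustration of this derivation, not the package's internal code) solves them by Gaussian elimination and reproduces the quadratic fit of the factorial data above:

```python
def least_squares_fit(vals, basis, pts):
    """Fit sum(c[j]*basis[j]) to (pts, vals) by solving the normal
    equations A^T A c = A^T y with Gaussian elimination."""
    m, n = len(basis), len(pts)
    A = [[f(x) for f in basis] for x in pts]
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
         for i in range(m)]                       # M = A^T A
    rhs = [sum(A[k][i] * vals[k] for k in range(n)) for i in range(m)]
    for i in range(m):                            # forward elimination
        piv = max(range(i, m), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, m):
            fac = M[r][i] / M[i][i]
            for j in range(i, m):
                M[r][j] -= fac * M[i][j]
            rhs[r] -= fac * rhs[i]
    c = [0.0] * m
    for i in range(m - 1, -1, -1):                # back substitution
        c[i] = (rhs[i] -
                sum(M[i][j] * c[j] for j in range(i + 1, m))) / M[i][i]
    return c

# the factorial data from the example above, basis {1, x, x**2}
pts = [1, 2, 3, 4, 5]
vals = [1, 2, 6, 24, 120]
c = least_squares_fit(vals, [lambda x: 1.0, lambda x: x,
                             lambda x: x * x], pts)
```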
\section{Function Bases}
The following procedures compute sets of functions,
for example to be used for approximation.
All procedures have
two parameters: the expression to be used as $variable$
(an identifier in most cases) and the
order of the desired system.
The functions are not scaled to a specific interval, but
the $variable$ can be accompanied by a scale factor
and/or a translation
in order to map the generic interval of orthogonality to another one
({\em e.g.\ }$(x - 1/2)\cdot 2\pi$).
The result is a function list in ascending order, such that
the first element is the function of order zero and (for
the polynomial systems) the function of order $n$ is the
$(n+1)$-th element.
\ttindex{monomial\_base}\ttindex{trigonometric\_base}\ttindex{Bernstein\_base}
\ttindex{Legendre\_base}\ttindex{Laguerre\_base}\ttindex{Hermite\_base}
\ttindex{Chebyshev\_base\_T}\ttindex{Chebyshev\_base\_U}
\begin{verbatim}
monomial_base(x,n)       {1,x,...,x**n}
trigonometric_base(x,n)  {1,sin x,cos x,sin(2x),cos(2x),...}
Bernstein_base(x,n)      Bernstein polynomials
Legendre_base(x,n)       Legendre polynomials
Laguerre_base(x,n)       Laguerre polynomials
Hermite_base(x,n)        Hermite polynomials
Chebyshev_base_T(x,n)    Chebyshev polynomials first kind
Chebyshev_base_U(x,n)    Chebyshev polynomials second kind
\end{verbatim}
Example:
\begin{verbatim}
Bernstein_base(x,5);

     5      4       3       2
{ - X  + 5*X  - 10*X  + 10*X  - 5*X + 1,

      4      3      2
 5*X*(X  - 4*X  + 6*X  - 4*X + 1),

     2      3      2
 10*X *( - X  + 3*X  - 3*X + 1),

     3   2
 10*X *(X  - 2*X + 1),

    4
 5*X *( - X + 1),

  5
 X }
\end{verbatim}
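The factored terms above are the Bernstein polynomials $B_{i,n}(x) = \binom{n}{i}\,x^{i}(1-x)^{n-i}$; the first list element is $B_{0,5}=(1-x)^5$ expanded. A short Python sketch (an illustration, not the package code) generates the same basis numerically and checks the partition-of-unity property $\sum_i B_{i,n}(x) = 1$:

```python
from math import comb

def bernstein_base(x, n):
    """Bernstein polynomials of order n evaluated at the point x:
    B(i,n) = C(n,i) * x**i * (1-x)**(n-i), for i = 0..n."""
    return [comb(n, i) * x ** i * (1.0 - x) ** (n - i)
            for i in range(n + 1)]

b = bernstein_base(0.3, 5)   # the order-5 basis at x = 0.3
```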