.nr chapter 0
.chapter "Introduction"
.section "Motivation"
In the development of our understanding of complex phenomena,
the most powerful tool available to enhance our comprehension is
abstraction. Abstraction arises from the recognition of similarities
between certain objects or processes, and the decision to
concentrate on these correspondences and to ignore, for the
present, their differences [Hoare 72]. In focusing on similarities,
one tends to regard them as fundamental and intrinsic, and to view
the differences as trivial.
Abstraction from all nonessentials can uncover the kernel of
a subject matter and be of great aid in those cases where a
decisive role is played by the properties picked out and
preserved by the abstraction. But, chiefly, what we desire in an
abstraction is a mechanism which permits the expression of
relevant details and the suppression of irrelevant details
[Liskov and Zilles 74]. In this manner, a good abstraction allows one to
concentrate on the essentials of a task without regard to unimportant factors.
One of the earliest recognized and most useful aids to
abstraction in programming is the self-contained subroutine or
procedure. Procedures appeared as early as 1945 in Zuse's
programming language, the Plankalkul [Knuth 76]. Other early
developers of programming languages also recognized the utility of the
concept of a procedure. Curry, in 1950, described the advantages
of including procedures in the programming languages being
developed at that time by pointing out that the decomposition
mechanism provided by a procedure would allow keener insight into
a problem by permitting consideration of its separate, distinct
parts [Curry 50].
The existence of procedures goes quite far toward capturing
the meaning of abstraction [Liskov and Zilles 74]. At the point
of its invocation, a procedure may be treated as a "black box",
which performs a specific function by means of an unprescribed
algorithm. Thus, at the level of its invocation, a procedure
separates the relevant detail of what it accomplishes from the
irrelevant detail of how it is implemented. Furthermore, at the
level of its implementation, a procedure facilitates understanding
of how it accomplishes its task by freeing the programmer from
considering why it is invoked.
However, procedures alone do not provide a sufficiently rich
vocabulary of abstractions [Liskov and Zilles 75]. Procedures,
while well suited to the description of abstract processes or
events, do not accommodate the description of abstract objects. To
alleviate this problem, the concept of a data abstraction was
introduced. A data abstraction comprises a group of related functions or
operations which act upon a particular class of objects, with the
constraint that objects in this class can only be observed or
modified by the application of these related operations [Liskov and
Zilles 75].
A typical example of a data abstraction is a "push down
stack". Here, the class of objects consists of all possible stacks
and the collection of related operations includes the usual stack
operations, like push and pop, and an operation to create new
stacks.
A data abstraction provides the same aids to abstraction as a
procedure and allows one to separate the implementation details of
a data object from its behavior. At the level of its use or
invocation, the data abstraction isolates the attributes that
specify the names and define the abstract meaning of the associated
operations. At the level of its implementation, the data
abstraction isolates the attributes that describe the representation
of objects and the implementation of the operations that
act upon these objects. Though these different attributes of use
and implementation are, in practice, highly interdependent, they
represent logically independent concepts [Guttag 75].
My main concern is with the first group of attributes, those dealing
with the specification of the data abstraction. Specification is
important because it describes the abstract object which has been
conceived in someone's mind. It can be used as a communication
medium among designers and implementors to ensure that an
implementor understands the designer's intentions about the data
abstraction he is coding [Liskov and Zilles 75].
Moreover, if a formal specification technique, one with
an explicitly and precisely defined syntax and semantics, is used,
even further benefits can be derived. Formal specifications can be
studied mathematically so that questions, such as the equivalence
of two different specifications, may be posed and rigorously
answered. Also, formal specifications can serve as the basis for
proofs of correctness of programs. If a programming language's
semantics are defined formally [Marcotty 76], properties of a
program written in this language can be formally proved. The
correctness of the program can then be proved by establishing the
equivalence of these properties and the specification. Finally,
formal specifications can be meaningfully processed by a computer
[Liskov and Zilles 75], [Liskov and Berzins 76]. Since this processing
can be done in advance of implementation, it can provide
design and configuration guidelines during program development.
.section "A Prelude to The Parnas Module Specification Technique"
The information contained in the specification of a data
abstraction can be divided into a syntactic part and a semantic
part [Liskov and Zilles 75]. The syntactic part provides a vocabulary
of terms or symbols which are used by the semantic part to
express the actual meaning or behavior of the data abstraction.
Two different approaches are used in capturing this meaning;
either an explicit, abstract model is supplied for the class of
objects and its associated operations are defined in terms of this
model, or the class of objects is defined implicitly via descriptions
of the operations [Liskov and Zilles 75]. For a stack, for example,
the first approach might model a stack explicitly as a sequence of
elements and define push as appending an element to that sequence,
while the second approach would characterize push solely by its
effect on the results of the other stack operations.
Parnas [Parnas 72b] has developed a technique and notation,
called a Parnas module, for writing specifications based on the
implicit approach. His specification scheme was devised with the
following goals in mind [Parnas 72b]:
.nofill_list
1) The specification must provide to the intended user
all the information that he will need to correctly use
the object specified, and nothing more.
.next
2) The specification must provide to the implementor all
the information about the intended use of the object
specified that he needs to implement the specification,
and no additional information.
.next
3) The specification should discuss the object specified
in the terms normally used by user and implementor alike
rather than in some other area of discourse.
.end_list
When using the Parnas technique, each data object is viewed
as the state of an abstract (and not necessarily finite) state
machine and, in the Parnas module, this state set is implicitly
defined. The basic idea is to separate the operations of the data
abstraction into two distinct groups; those which do not change
the state but allow some aspect of the state to be observed, the
value returning or V-functions, and those which change the state,
the operation or O-functions. The specifications are then written
by stating the effect of each O-function on the result of each
V-function. This implicitly defines the smallest set of states
necessary to distinguish the variations in the results of the
V-functions [Liskov and Zilles 75].
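As a small, hypothetical illustration of this idea, consider a counter
data abstraction with a single V-function, count, which returns the
current value of the counter, and a single O-function, increment.
Written in the style of the bounded integer stack module of figure 1
below, the entire behavior of the counter is captured by stating the
effect of increment on the result of count;
.nofill_list
count = non-derived V-function( ) returns integer
.next
Initial Value: 0
.next
increment = O-function( )
.next
Effects: 'count' = count + 1
.end_list
That is, after a call of increment, the result returned by count is one
greater than the result it returned before the call.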
A problem with this approach is that certain O-functions may
have delayed effects on the V-functions. In other words, some
property of the state will be observed by a V-function only after
some O-function has been used. For example, consider a push down
stack with the operations PUSH, POP and TOP. PUSH has a delayed
effect on TOP in the sense that after a new element has been
pushed on the stack, the element formerly on top of the stack is no
longer observable by TOP, but it will be again once POP is used.
Parnas used an informal language to express these delayed
effects [Parnas 72b, 75]. In his specifications, he included a
section, called Module Properties, for describing delayed effects
in English, at times interlaced with simple mathematical formulae
[Parnas 75]. For example, to specify the interaction of push and
pop on a stack, Parnas used the phrase "The sequence PUSH(a);POP
has no net effect if no error calls occur" [Parnas 75].
One method to formally describe delayed effects is to introduce
hidden V-functions to represent aspects of the state which
are not immediately observable. For example, in the specification of a push down stack,
one could introduce a hidden V-function to record the element formerly on top of the stack.
This approach has been followed
by researchers at the Stanford Research Institute [Robinson
75a], [Spitzen 76]. However, their main concern with Parnas modules
is their use in a general methodology for the design, implementation
and proof of large software systems [Robinson 75b],
[Neumann 74]. With this goal in mind, they have designed a
specification language, called SPECIAL, for describing Parnas
modules [Roubine 76]. But no formal semantics have been provided
for SPECIAL.
An example of a Parnas module specification using hidden
V-functions is given below in figure 1. Here, the data abstraction
defined is a bounded integer stack with the following operations;
.ls 1
.nofill_list
push pushes an integer on a stack
.next
pop pops the top of a stack
.next
top returns the integer on top of a stack
.next
depth returns the number of elements in a stack
.end_list
.begin_figure "Bounded Integer Stack"
.ls 1
.sp 2
bounded_stack = 1Parnas_module is* push, pop, top, depth
depth = 1non-derived V-function*( ) 1returns* integer
1Appl. Cond.*: true
1Initial Value*: 0
1end* depth
stack = 1hidden V-function*(i:integer) 1returns* integer
1Appl. Cond.*: true
1Initial Value*: undefined
1end* stack
top = 1derived V-function*( ) 1returns* integer
1Appl. Cond.*: ~(depth = 0)
1Derivation*: top = stack(depth)
1end* top
pop = 1O-function*( )
1Appl. Cond.*: ~(depth = 0)
1Effects*: 'depth' = depth - 1
1end* pop
push = 1O-function*(a:integer)
1Appl. Cond.*: depth < 100
1Effects*: 'depth' = depth + 1
'stack'(depth + 1) = a
1end* push
1end* bounded_stack
.finish_figure
A Parnas module specification consists of two parts; an
interface description and a collection of function definitions.
The interface description of a Parnas module provides a very
brief description of the interface which the module presents to
the outside environment. It consists of the name of the data
abstraction being specified by the module and a list of the
functions that users of the data abstraction may employ;
bounded_stack = Parnas_module is push, pop, top, depth
The name specified in the interface description, besides
being the name of the data abstraction defined by the module, is
also called, following [Guttag 75], the type of interest. Users
of a particular Parnas module specification view objects of the
type of interest as indivisible, non-decomposable entities.
However, inside the module, objects are viewed as decomposable
states of the abstract machine implicitly defined by the module.
Notice that stack is not included in the interface description.
This means it is hidden from users of the module and
can not be accessed by them. Its inclusion in the body of the
module is necessary to describe the delayed effects discussed
above.
The body of a Parnas module consists of the function
definitions which specify the actions allowed on the type of
interest. A function definition must be given for every action
named in the interface description.
There are two kinds of functions defined in the body of a
Parnas module, V-functions and O-functions.
A V-function is defined by first writing its name and then
its mapping description;
depth = non-derived V-function( ) returns integer
The mapping description specifies the type of the function's
parameters and the type of the values it returns. Note that
certain V-functions, such as depth and top, require no parameters.
Furthermore, the mapping description states which particular type
of V-function, derived, non-derived or hidden, is being defined.
A non-derived or hidden V-function is a primitive, intrinsic
aspect of the type of interest. However, hidden V-functions
differ from non-derived V-functions in that they can not be
accessed by users of the type of interest. Furthermore, they may
not appear in a module's interface description. In contrast,
derived V-functions are not primitive aspects of the type of
interest but are defined in terms of the other V-functions of the
module.
After the mapping description, the applicability condition
of a V-function is defined. This is a Boolean expression which
governs when a call of the V-function succeeds. This expression
may not contain any derived V-functions and must evaluate to true
for any call to succeed. Hidden V-functions are always applicable
(their applicability condition must always be defined as true).
The applicability condition may constrain a V-function so
that it is only a partial function; i.e., it is not defined for
all elements of its domain. When a V-function is passed a set of
arguments for which it is not defined, an error condition is
raised, telling the user what has occurred. In figure 1, for example,
the applicability condition of top is ~(depth = 0), so a call of top
on an empty stack raises an error condition. However, how the error
condition is handled is viewed here as the responsibility of the
user. The Parnas module only specifies when an error condition is
raised.
Finally, a section which determines the specific values
returned by a V-function is given. For a non-derived or hidden
V-function, this is the initial value section which specifies the
value returned by calls of this particular V-function in the
initial state of the abstract machine implicitly defined by the
module. The initial value is the basis for the values returned by
the V-function in the other states of the abstract machine.
.keep
For derived V-functions, a derivation section is given.
Derivation: top = stack(depth)
..
.end_keep
This section defines the function in terms of the other
non-derived and hidden V-functions of the module.
An O-function is specified by first defining its name and
mapping description
push = O-function(a:integer)
and then specifying its applicability condition. As with
V-functions, a call of an O-function only succeeds if its
applicability condition is satisfied; otherwise an error
condition is raised. O-functions do not return any values. They
only change the state of the abstract machine implicitly defined
by the module.
Now recall that objects of the type of interest are viewed
inside the module as states of this abstract machine. This point
helps to explain the interpretation of the quotes around the
V-functions in an O-function's applicability condition and
effects section. These quotes are used to represent the result
returned by the V-function in the new state of the machine produced
by completion of an O-function call. An unquoted V-function
represents the function in the state existing before completion
of the call. This new state is defined by the
equations in an O-function's effects section which redefine the
mappings of certain V-functions in terms of the previous state.
Only non-derived or hidden V-functions may appear in an equation
in an O-function's effects section. Furthermore, non-derived or
hidden V-functions which do not appear on the left hand side of
any equation remain unchanged in the new state.
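For example, consider a call push(a) on the bounded integer stack of
figure 1 in a state where depth returns some value d with d < 100, so
that the applicability condition of push is satisfied. The effects
section of push contains the equations
.nofill_list
'depth' = depth + 1
.next
'stack'(depth + 1) = a
.end_list
These equations state that, in the new state, depth returns d + 1 and
stack(d + 1) returns a. A subsequent call of top, whose derivation is
stack(depth), therefore returns a.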
Together, the non-derived V-functions, hidden V-functions and
the O-functions implicitly define the state set of a Parnas
module. Note that since the derived V-functions are defined in
terms of the other V-functions, they are not needed to define
this state set. Their main use is to allow users of the module
restricted access to the hidden V-functions. Thus, they help
support Parnas's principle of information hiding [Parnas 72a].
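To illustrate how these functions define the states of the bounded
integer stack of figure 1, consider a hypothetical sequence of calls
beginning in the initial state, in which depth returns 0 and a call
of top raises an error condition;
.nofill_list
after push(5), depth returns 1 and top returns 5
.next
after push(7), depth returns 2 and top returns 7
.next
after pop, depth returns 1 and top once again returns 5
.end_list
Each call of push or pop produces a new state of the abstract machine,
and these states are distinguished solely by the results of the
non-derived and hidden V-functions depth and stack.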