remove-nonfree-references.patch 17 KB

diff --git a/docs/layout.xml b/docs/layout.xml
index d2f18bc..292e7d8 100644
--- a/docs/layout.xml
+++ b/docs/layout.xml
@@ -5,8 +5,6 @@
 <tab type="usergroup" visible="yes" title="Tutorials">
 <tab type="user" url="\ref installing" visible="yes" title="Installation"/>
 <tab type="user" url="\ref using_on_linux" visible="yes" title="Using on Linux"/>
- <tab type="user" url="\ref using_on_windows" visible="yes" title="Using on Windows"/>
- <tab type="user" url="\ref using_on_osx" visible="yes" title="Using on OSX"/>
 <tab type="user" url="\ref gettingstarted" visible="yes" title="Getting Started"/>
 <tab type="user" url="\ref vectorization" visible="yes" title="Introduction to Vectorization"/>
 <tab type="user" url="\ref matrixmanipulation" visible="yes" title="Array and Matrix Manipulation"/>
diff --git a/docs/pages/README.md b/docs/pages/README.md
index 8a395a7..46011ca 100644
--- a/docs/pages/README.md
+++ b/docs/pages/README.md
@@ -9,10 +9,8 @@ ArrayFire is a high performance software library for parallel computing with an
 ## Installing ArrayFire
-You can install ArrayFire using either a binary installer for Windows, OSX,
-or Linux or download it from source:
+You can install ArrayFire using Parabola or download it from source:
-* [Binary installers for Windows, OSX, and Linux](\ref installing)
 * [Build from source](https://github.com/arrayfire/arrayfire)
 ## Easy to use
@@ -24,7 +22,7 @@ readable math-resembling notation. You _do not_ need expertise in
 parallel programming to use ArrayFire.
 A few lines of ArrayFire code
-accomplishes what can take 100s of complicated lines in CUDA or OpenCL
+accomplishes what can take 100s of complicated lines in OpenCL
 kernels.
 ## ArrayFire is extensive!
@@ -56,25 +54,23 @@ unsigned integers.
 #### Extending ArrayFire
 ArrayFire can be used as a stand-alone application or integrated with
-existing CUDA or OpenCL code. All ArrayFire `arrays` can be
-interchanged with other CUDA or OpenCL data structures.
+existing OpenCL code. All ArrayFire `arrays` can be
+interchanged with other OpenCL data structures.
 ## Code once, run anywhere!
-With support for x86, ARM, CUDA, and OpenCL devices, ArrayFire supports for a comprehensive list of devices.
+With support for x86, ARM, and OpenCL devices, ArrayFire supports a comprehensive list of devices.
 Each ArrayFire installation comes with:
- - a CUDA version (named 'libafcuda') for [NVIDIA
- GPUs](https://developer.nvidia.com/cuda-gpus),
 - an OpenCL version (named 'libafopencl') for [OpenCL devices](http://www.khronos.org/conformance/adopters/conformant-products#opencl)
- - a CPU version (named 'libafcpu') to fall back to when CUDA or OpenCL devices are not available.
+ - a CPU version (named 'libafcpu') to fall back to when OpenCL devices are not available.
 ## ArrayFire is highly efficient
 #### Vectorized and Batched Operations
 ArrayFire supports batched operations on N-dimensional arrays.
-Batch operations in ArrayFire are run in parallel ensuring an optimal usage of your CUDA or OpenCL device.
+Batch operations in ArrayFire are run in parallel, ensuring optimal usage of your OpenCL device.
 You can get the best performance out of ArrayFire using [vectorization techniques](\ref vectorization).
@@ -93,7 +89,7 @@ Read more about how [ArrayFire JIT](http://arrayfire.com/performance-of-arrayfir
 ## Simple Example
 Here's a live example to let you see ArrayFire code. You create [arrays](\ref construct_mat)
-which reside on CUDA or OpenCL devices. Then you can use
+which reside on OpenCL devices. Then you can use
 [ArrayFire functions](modules.htm) on those [arrays](\ref construct_mat).
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.cpp}
@@ -144,7 +140,7 @@ Formatted:
 BibTeX:
 @misc{Yalamanchili2015,
- abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (CUDA, OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
+ abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
 address = {Atlanta},
 author = {Yalamanchili, Pavan and Arshad, Umar and Mohammed, Zakiuddin and Garigipati, Pradeep and Entschev, Peter and Kloppenborg, Brian and Malcolm, James and Melonakos, John},
 publisher = {AccelerEyes},
diff --git a/docs/pages/configuring_arrayfire_environment.md b/docs/pages/configuring_arrayfire_environment.md
index 33c5a39..36b52b7 100644
--- a/docs/pages/configuring_arrayfire_environment.md
+++ b/docs/pages/configuring_arrayfire_environment.md
@@ -28,19 +28,6 @@ detailed. This helps in locating the exact failure.
 AF_PRINT_ERRORS=1 ./myprogram
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-AF_CUDA_DEFAULT_DEVICE {#af_cuda_default_device}
--------------------------------------------------------------------------------
-
-Use this variable to set the default CUDA device. Valid values for this
-variable are the device identifiers shown when af::info is run.
-
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-AF_CUDA_DEFAULT_DEVICE=1 ./myprogram_cuda
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Note: af::setDevice call in the source code will take precedence over this
-variable.
-
 AF_OPENCL_DEFAULT_DEVICE {#af_opencl_default_device}
 -------------------------------------------------------------------------------
diff --git a/docs/pages/getting_started.md b/docs/pages/getting_started.md
index 7ea9a75..21a3dfe 100644
--- a/docs/pages/getting_started.md
+++ b/docs/pages/getting_started.md
@@ -70,14 +70,12 @@ For example:
 \snippet test/getting_started.cpp ex_getting_started_init
 ArrayFire also supports array initialization from memory already on the GPU.
-For example, with CUDA one can populate an `array` directly using a call
-to `cudaMemcpy`:
+For example:
 \snippet test/getting_started.cpp ex_getting_started_dev_ptr
 Similar functionality exists for OpenCL too. If you wish to intermingle
-ArrayFire with CUDA or OpenCL code, we suggest you consult the
-[CUDA interoperability](\ref interop_cuda) or
+ArrayFire with OpenCL code, we suggest you consult the
 [OpenCL interoperability](\ref interop_opencl) pages for detailed instructions.
 # ArrayFire array contents, dimensions, and properties {#getting_started_array_properties}
@@ -111,7 +109,7 @@ full documentation on the [array](\ref af::array).
 # Writing mathematical expressions in ArrayFire {#getting_started_writing_math}
 ArrayFire features an intelligent Just-In-Time (JIT) compilation engine that
-converts expressions using arrays into the smallest number of CUDA/OpenCL
+converts expressions using arrays into the smallest number of OpenCL
 kernels. For most operations on arrays, ArrayFire functions like a vector library.
 That means that an element-wise operation, like `c[i] = a[i] + b[i]` in C,
 would be written more concisely without indexing, like `c = a + b`.
@@ -153,7 +151,7 @@ using the `af::` namespace.
 # Indexing {#getting_started_indexing}
 Like all functions in ArrayFire, indexing is also executed in parallel on
-the OpenCL/CUDA device.
+the OpenCL device.
 Because of this, indexing becomes part of a JIT operation and is accomplished
 using parentheses instead of square brackets (i.e. as `A(0)` instead of `A[0]`).
 To index `af::array`s you may use one or a combination of the following functions:
@@ -177,9 +175,9 @@ The `host` function *copies* the data from the device and makes it available
 in a C-style array on the host. As such, it is up to the developer to manage
 any memory returned by `host`.
 The `device` function returns a pointer/reference to device memory for
-interoperability with external CUDA/OpenCL kernels. As this memory belongs to
+interoperability with external OpenCL kernels. As this memory belongs to
 ArrayFire, the programmer should not attempt to free/deallocate the pointer.
-For example, here is how we can interact with both OpenCL and CUDA:
+For example, here is how we can interact with OpenCL:
 \snippet test/getting_started.cpp ex_getting_started_ptr
@@ -249,8 +247,7 @@ simply include the `arrayfire.h` header file and start coding!
 Now that you have a general introduction to ArrayFire, where do you go from
 here? In particular you might find these documents useful
-* [Building an ArrayFire program on Linux](\ref using_on_linux)
-* [Building an Arrayfire program on Windows](\ref using_on_windows)
+* [Building an ArrayFire program on GNU/Linux](\ref using_on_linux)
 * [Timing ArrayFire code](\ref timing)
diff --git a/docs/pages/release_notes.md b/docs/pages/release_notes.md
index bdcd911..854be12 100644
--- a/docs/pages/release_notes.md
+++ b/docs/pages/release_notes.md
@@ -672,7 +672,6 @@ v3.1.1
 Installers
 -----------
-* CUDA backend now depends on CUDA 7.5 toolkit
 * OpenCL backend now require OpenCL 1.2 or greater
 Bug Fixes
@@ -752,10 +751,6 @@ Function Additions
 * \ref saveArray() and \ref readArray() - Stream arrays to binary files
 * \ref toString() - toString function returns the array and data as a string
-* CUDA specific functionality
- * \ref getStream() - Returns default CUDA stream ArrayFire uses for the current device
- * \ref getNativeId() - Returns native id of the CUDA device
-
 Improvements
 ------------
 * dot
@@ -779,11 +774,6 @@ Improvements
 * CPU Backend
 * Device properties for CPU
 * Improved performance when all buffers are indexed linearly
-* CUDA Backend
- * Use streams in CUDA (no longer using default stream)
- * Using async cudaMem ops
- * Add 64-bit integer support for JIT functions
- * Performance improvements for CUDA JIT for non-linear 3D and 4D arrays
 * OpenCL Backend
 * Improve compilation times for OpenCL backend
 * Performance improvements for non-linear JIT kernels on OpenCL
@@ -817,7 +807,7 @@ New Examples
 Installer
 ----------
 * Fixed bug in automatic detection of ArrayFire when using with CMake in Windows
-* The Linux libraries are now compiled with static version of FreeImage
+* The GNU/Linux libraries are now compiled with static version of FreeImage
 Known Issues
 ------------
diff --git a/docs/pages/using_on_linux.md b/docs/pages/using_on_linux.md
index 493080f..b86c326 100644
--- a/docs/pages/using_on_linux.md
+++ b/docs/pages/using_on_linux.md
@@ -1,42 +1,35 @@
-Using ArrayFire on Linux {#using_on_linux}
+Using ArrayFire on GNU/Linux {#using_on_linux}
 =====
 Once you have [installed](\ref installing) ArrayFire on your system, the next thing to do is
-set up your build system. On Linux, you can create ArrayFire projects using
+set up your build system. On GNU/Linux, you can create ArrayFire projects using
 almost any editor, compiler, or build system. The only requirements are
 that you include the ArrayFire header directories and link with the ArrayFire
 library you intend to use.
 ## The big picture
-On Linux, we suggest you install ArrayFire to the `/usr/local` directory
+On GNU/Linux, we suggest you install ArrayFire to the `/usr/local` directory
 so that all of the include files and libraries are part of your standard path.
 The installer will populate files in the following sub-directories:
 include/arrayfire.h - Primary ArrayFire include file
 include/af/*.h - Additional include files
- lib/libaf* - CPU, CUDA, and OpenCL libraries (.a, .so)
+ lib/libaf* - CPU and OpenCL libraries (.a, .so)
 lib/libforge* - Visualization library
 share/ArrayFire/cmake/* - CMake config (find) scripts
 share/ArrayFire/examples/* - All ArrayFire examples
 Because ArrayFire follows standard installation practices, you can use basically
 any build system to create and compile projects that use ArrayFire.
-Among the many possible build systems on Linux we suggest using ArrayFire with
+Among the many possible build systems on GNU/Linux we suggest using ArrayFire with
 either CMake or Makefiles with CMake being our preferred build system.
 ## Prerequisite software
 To build ArrayFire projects you will need a compiler
-#### Fedora, Centos and Redhat
-
-Install EPEL repo (not required for Fedora)
-
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-yum install epel-release
-yum update
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+#### BLAG Linux and GNU
 Install build dependencies
@@ -44,7 +37,7 @@ Install build dependencies
 yum install gcc gcc-c++ cmake make
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-#### Debian and Ubuntu
+#### GnewSense and Trisquel
 Install common dependencies
@@ -64,7 +57,7 @@ ArrayFire to an existing project.
 As [discussed above](#big-picture), ArrayFire ships with a series of CMake
 scripts to make finding and using our library easy.
 The scripts will automatically find all versions of the ArrayFire library
-and pick the most powerful of the installed backends (typically CUDA).
+and pick the most powerful of the installed backends.
 First create a file called `CMakeLists.txt` in your project directory:
@@ -82,22 +75,13 @@ and populate it with the following code:
 FIND_PACKAGE(OpenCL)
 SET(EXTRA_LIBS ${CMAKE_THREAD_LIBS_INIT} ${OpenCL_LIBRARIES})
- # Or if you intend to use CUDA, you need it as well as NVVM:
- FIND_PACKAGE(CUDA)
- FIND_PACKAGE(NVVM) # this FIND script can be found in the ArrayFire CMake example repository
- SET(EXTRA_LIBS ${CMAKE_THREAD_LIBS_INIT} ${CUDA_LIBRARIES} ${NVVM_LIB})
-
 ADD_EXECUTABLE(my_executable [list your source files here])
 TARGET_LINK_LIBRARIES(my_executable ${ArrayFire_LIBRARIES} ${EXTRA_LIBS})
 where `my_executable` is the name of the executable you wish to create.
 See the [CMake documentation](https://cmake.org/documentation/) for more
 information on how to use CMake.
-Clearly the above code snippet precludes the use of both CUDA and OpenCL, see
-the
-[ArrayFire CMake Example](https://github.com/arrayfire/arrayfire-project-templates/tree/master/CMake);
-for an example of how to build executables for both backends from the same
-CMake script.
+Note that the above code snippet is specific to the OpenCL backend.
 In the above code listing, the `FIND_PACKAGE` will find the ArrayFire include
 files, libraries, and define several variables including:
@@ -112,8 +96,6 @@ If you wish to use a specific backend, the find script also defines these variab
 ArrayFire_CPU_FOUND - True of the ArrayFire CPU library has been found.
 ArrayFire_CPU_LIBRARIES - Location of ArrayFire's CPU library, if found
- ArrayFire_CUDA_FOUND - True of the ArrayFire CUDA library has been found.
- ArrayFire_CUDA_LIBRARIES - Location of ArrayFire's CUDA library, if found
 ArrayFire_OpenCL_FOUND - True of the ArrayFire OpenCL library has been found.
 ArrayFire_OpenCL_LIBRARIES - Location of ArrayFire's OpenCL library, if found
 ArrayFire_Unified_FOUND - True of the ArrayFire Unified library has been found.
@@ -121,13 +103,8 @@ If you wish to use a specific backend, the find script also defines these variab
 Therefore, if you wish to target a specific specific backend, simply replace
 `${ArrayFire_LIBRARIES}` with `${ArrayFire_CPU}`, `${ArrayFire_OPENCL}`,
-`${ArrayFire_CUDA}`, or `${ArrayFire_Unified}` in the `TARGET_LINK_LIBRARIES`
+or `${ArrayFire_Unified}` in the `TARGET_LINK_LIBRARIES`
 command above.
-If you intend on building your software to link with all of these backends,
-please see the
-[CMake Project Example](https://github.com/arrayfire/arrayfire-project-templates)
-which makes use of some fairly fun CMake tricks to avoid re-compiling code
-whenever possible.
 Next we need to instruct CMake to create build instructions and then compile.
 We suggest using CMake's out-of-source build functionality to keep your build
@@ -161,8 +138,7 @@ instructions.
 Similarly, you will need to specify the path to the ArrayFire library using
 the `-L` option (e.g. `-L/usr/local/lib`) followed by the specific ArrayFire
 library you wish to use using the `-l` option (for example `-lafcpu`,
-`-lafopencl`, `-lafcuda`, or `-laf` for the CPU, OpenCL, CUDA, and unified
-backends respectively.
+`-lafopencl`, or `-laf` for the CPU, OpenCL, and unified backends respectively).
 Here is a minimial example MakeFile which uses ArrayFire's CPU backend:
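The hunks above are standard unified diffs, so the whole file can be applied with GNU `patch` or `git apply` from the root of an ArrayFire source checkout. Below is a minimal, self-contained sketch of that workflow using a toy stand-in tree and a toy one-hunk patch (the `demo.patch` name and its contents are invented for illustration, not taken from this file):

```shell
#!/bin/sh
set -e
# Work in a throwaway directory so the demo touches nothing real.
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for a source tree: one file containing a line to be removed.
mkdir -p docs/pages
printf 'line one\nCUDA reference\nline three\n' > docs/pages/README.md

# A toy patch in the same unified-diff format as the hunks above.
cat > demo.patch <<'EOF'
--- a/docs/pages/README.md
+++ b/docs/pages/README.md
@@ -1,3 +1,2 @@
 line one
-CUDA reference
 line three
EOF

# Check that the patch applies cleanly before modifying anything...
patch -p1 --dry-run < demo.patch
# ...then apply it for real; -p1 strips the leading a/ and b/ components.
patch -p1 < demo.patch

cat docs/pages/README.md
```

Against a real checkout you would substitute `remove-nonfree-references.patch` for `demo.patch`; `git apply --check` plays the same dry-run role as `patch --dry-run`.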