- diff --git a/docs/layout.xml b/docs/layout.xml
- index d2f18bc..292e7d8 100644
- --- a/docs/layout.xml
- +++ b/docs/layout.xml
- @@ -5,8 +5,6 @@
- <tab type="usergroup" visible="yes" title="Tutorials">
- <tab type="user" url="\ref installing" visible="yes" title="Installation"/>
- <tab type="user" url="\ref using_on_linux" visible="yes" title="Using on Linux"/>
- - <tab type="user" url="\ref using_on_windows" visible="yes" title="Using on Windows"/>
- - <tab type="user" url="\ref using_on_osx" visible="yes" title="Using on OSX"/>
- <tab type="user" url="\ref gettingstarted" visible="yes" title="Getting Started"/>
- <tab type="user" url="\ref vectorization" visible="yes" title="Introduction to Vectorization"/>
- <tab type="user" url="\ref matrixmanipulation" visible="yes" title="Array and Matrix Manipulation"/>
- diff --git a/docs/pages/README.md b/docs/pages/README.md
- index 8a395a7..46011ca 100644
- --- a/docs/pages/README.md
- +++ b/docs/pages/README.md
- @@ -9,10 +9,8 @@ ArrayFire is a high performance software library for parallel computing with an
-
- ## Installing ArrayFire
-
- -You can install ArrayFire using either a binary installer for Windows, OSX,
- -or Linux or download it from source:
- +You can install ArrayFire from the Parabola repositories or build it from source:
-
- -* [Binary installers for Windows, OSX, and Linux](\ref installing)
- * [Build from source](https://github.com/arrayfire/arrayfire)
-
- ## Easy to use
- @@ -24,7 +22,7 @@ readable math-resembling notation. You _do not_ need expertise in
- parallel programming to use ArrayFire.
-
- A few lines of ArrayFire code
- -accomplishes what can take 100s of complicated lines in CUDA or OpenCL
- +accomplish what can take hundreds of complicated lines in OpenCL
- kernels.
-
- ## ArrayFire is extensive!
- @@ -56,25 +54,23 @@ unsigned integers.
- #### Extending ArrayFire
-
- ArrayFire can be used as a stand-alone application or integrated with
- -existing CUDA or OpenCL code. All ArrayFire `arrays` can be
- -interchanged with other CUDA or OpenCL data structures.
- +existing OpenCL code. All ArrayFire `arrays` can be
- +interchanged with other OpenCL data structures.
-
- ## Code once, run anywhere!
-
- -With support for x86, ARM, CUDA, and OpenCL devices, ArrayFire supports for a comprehensive list of devices.
- +With support for x86, ARM, and OpenCL devices, ArrayFire supports a comprehensive list of devices.
-
- Each ArrayFire installation comes with:
- - - a CUDA version (named 'libafcuda') for [NVIDIA
- - GPUs](https://developer.nvidia.com/cuda-gpus),
- - an OpenCL version (named 'libafopencl') for [OpenCL devices](http://www.khronos.org/conformance/adopters/conformant-products#opencl)
- - - a CPU version (named 'libafcpu') to fall back to when CUDA or OpenCL devices are not available.
- + - a CPU version (named 'libafcpu') to fall back to when OpenCL devices are not available.
-
- ## ArrayFire is highly efficient
-
- #### Vectorized and Batched Operations
-
- ArrayFire supports batched operations on N-dimensional arrays.
- -Batch operations in ArrayFire are run in parallel ensuring an optimal usage of your CUDA or OpenCL device.
- +Batch operations in ArrayFire are run in parallel, ensuring optimal use of your OpenCL device.
-
- You can get the best performance out of ArrayFire using [vectorization techniques](\ref vectorization).
-
- @@ -93,7 +89,7 @@ Read more about how [ArrayFire JIT](http://arrayfire.com/performance-of-arrayfir
- ## Simple Example
-
- Here's a live example to let you see ArrayFire code. You create [arrays](\ref construct_mat)
- -which reside on CUDA or OpenCL devices. Then you can use
- +which reside on OpenCL devices. Then you can use
- [ArrayFire functions](modules.htm) on those [arrays](\ref construct_mat).
-
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.cpp}
- @@ -144,7 +140,7 @@ Formatted:
- BibTeX:
-
- @misc{Yalamanchili2015,
- - abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (CUDA, OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
- + abstract = {ArrayFire is a high performance software library for parallel computing with an easy-to-use API. Its array based function set makes parallel programming simple. ArrayFire's multiple backends (OpenCL and native CPU) make it platform independent and highly portable. A few lines of code in ArrayFire can replace dozens of lines of parallel computing code, saving you valuable time and lowering development costs.},
- address = {Atlanta},
- author = {Yalamanchili, Pavan and Arshad, Umar and Mohammed, Zakiuddin and Garigipati, Pradeep and Entschev, Peter and Kloppenborg, Brian and Malcolm, James and Melonakos, John},
- publisher = {AccelerEyes},
- diff --git a/docs/pages/configuring_arrayfire_environment.md b/docs/pages/configuring_arrayfire_environment.md
- index 33c5a39..36b52b7 100644
- --- a/docs/pages/configuring_arrayfire_environment.md
- +++ b/docs/pages/configuring_arrayfire_environment.md
- @@ -28,19 +28,6 @@ detailed. This helps in locating the exact failure.
- AF_PRINT_ERRORS=1 ./myprogram
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- -AF_CUDA_DEFAULT_DEVICE {#af_cuda_default_device}
- --------------------------------------------------------------------------------
- -
- -Use this variable to set the default CUDA device. Valid values for this
- -variable are the device identifiers shown when af::info is run.
- -
- -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- -AF_CUDA_DEFAULT_DEVICE=1 ./myprogram_cuda
- -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- -
- -Note: af::setDevice call in the source code will take precedence over this
- -variable.
- -
- AF_OPENCL_DEFAULT_DEVICE {#af_opencl_default_device}
- -------------------------------------------------------------------------------
-
- diff --git a/docs/pages/getting_started.md b/docs/pages/getting_started.md
- index 7ea9a75..21a3dfe 100644
- --- a/docs/pages/getting_started.md
- +++ b/docs/pages/getting_started.md
- @@ -70,14 +70,12 @@ For example:
- \snippet test/getting_started.cpp ex_getting_started_init
-
- ArrayFire also supports array initialization from memory already on the GPU.
- -For example, with CUDA one can populate an `array` directly using a call
- -to `cudaMemcpy`:
- +For example:
-
- \snippet test/getting_started.cpp ex_getting_started_dev_ptr
-
- Similar functionality exists for OpenCL too. If you wish to intermingle
- -ArrayFire with CUDA or OpenCL code, we suggest you consult the
- -[CUDA interoperability](\ref interop_cuda) or
- +ArrayFire with OpenCL code, we suggest you consult the
- [OpenCL interoperability](\ref interop_opencl) pages for detailed instructions.
-
- # ArrayFire array contents, dimensions, and properties {#getting_started_array_properties}
- @@ -111,7 +109,7 @@ full documentation on the [array](\ref af::array).
- # Writing mathematical expressions in ArrayFire {#getting_started_writing_math}
-
- ArrayFire features an intelligent Just-In-Time (JIT) compilation engine that
- -converts expressions using arrays into the smallest number of CUDA/OpenCL
- +converts expressions using arrays into the smallest number of OpenCL
- kernels. For most operations on arrays, ArrayFire functions like a vector library.
- That means that an element-wise operation, like `c[i] = a[i] + b[i]` in C,
- would be written more concisely without indexing, like `c = a + b`.
- @@ -153,7 +151,7 @@ using the `af::` namespace.
- # Indexing {#getting_started_indexing}
-
- Like all functions in ArrayFire, indexing is also executed in parallel on
- -the OpenCL/CUDA device.
- +the OpenCL device.
- Because of this, indexing becomes part of a JIT operation and is accomplished
- using parentheses instead of square brackets (i.e. as `A(0)` instead of `A[0]`).
- To index `af::array`s you may use one or a combination of the following functions:
- @@ -177,9 +175,9 @@ The `host` function *copies* the data from the device and makes it available
- in a C-style array on the host. As such, it is up to the developer to manage
- any memory returned by `host`.
- The `device` function returns a pointer/reference to device memory for
- -interoperability with external CUDA/OpenCL kernels. As this memory belongs to
- +interoperability with external OpenCL kernels. As this memory belongs to
- ArrayFire, the programmer should not attempt to free/deallocate the pointer.
- -For example, here is how we can interact with both OpenCL and CUDA:
- +For example, here is how we can interact with OpenCL:
-
- \snippet test/getting_started.cpp ex_getting_started_ptr
-
- @@ -249,8 +247,7 @@ simply include the `arrayfire.h` header file and start coding!
- Now that you have a general introduction to ArrayFire, where do you go from
- here? In particular you might find these documents useful
-
- -* [Building an ArrayFire program on Linux](\ref using_on_linux)
- -* [Building an Arrayfire program on Windows](\ref using_on_windows)
- +* [Building an ArrayFire program on GNU/Linux](\ref using_on_linux)
- * [Timing ArrayFire code](\ref timing)
-
-
- diff --git a/docs/pages/release_notes.md b/docs/pages/release_notes.md
- index bdcd911..854be12 100644
- --- a/docs/pages/release_notes.md
- +++ b/docs/pages/release_notes.md
- @@ -672,7 +672,6 @@ v3.1.1
- Installers
- -----------
-
- -* CUDA backend now depends on CUDA 7.5 toolkit
- * OpenCL backend now require OpenCL 1.2 or greater
-
- Bug Fixes
- @@ -752,10 +751,6 @@ Function Additions
- * \ref saveArray() and \ref readArray() - Stream arrays to binary files
- * \ref toString() - toString function returns the array and data as a string
-
- -* CUDA specific functionality
- - * \ref getStream() - Returns default CUDA stream ArrayFire uses for the current device
- - * \ref getNativeId() - Returns native id of the CUDA device
- -
- Improvements
- ------------
- * dot
- @@ -779,11 +774,6 @@ Improvements
- * CPU Backend
- * Device properties for CPU
- * Improved performance when all buffers are indexed linearly
- -* CUDA Backend
- - * Use streams in CUDA (no longer using default stream)
- - * Using async cudaMem ops
- - * Add 64-bit integer support for JIT functions
- - * Performance improvements for CUDA JIT for non-linear 3D and 4D arrays
- * OpenCL Backend
- * Improve compilation times for OpenCL backend
- * Performance improvements for non-linear JIT kernels on OpenCL
- @@ -817,7 +807,7 @@ New Examples
- Installer
- ----------
- * Fixed bug in automatic detection of ArrayFire when using with CMake in Windows
- -* The Linux libraries are now compiled with static version of FreeImage
- +* The GNU/Linux libraries are now compiled with a static version of FreeImage
-
- Known Issues
- ------------
- diff --git a/docs/pages/using_on_linux.md b/docs/pages/using_on_linux.md
- index 493080f..b86c326 100644
- --- a/docs/pages/using_on_linux.md
- +++ b/docs/pages/using_on_linux.md
- @@ -1,42 +1,35 @@
- -Using ArrayFire on Linux {#using_on_linux}
- +Using ArrayFire on GNU/Linux {#using_on_linux}
- =====
-
- Once you have [installed](\ref installing) ArrayFire on your system, the next thing to do is
- -set up your build system. On Linux, you can create ArrayFire projects using
- +set up your build system. On GNU/Linux, you can create ArrayFire projects using
- almost any editor, compiler, or build system. The only requirements are
- that you include the ArrayFire header directories and link with the ArrayFire
- library you intend to use.
-
- ## The big picture
-
- -On Linux, we suggest you install ArrayFire to the `/usr/local` directory
- +On GNU/Linux, we suggest you install ArrayFire to the `/usr/local` directory
- so that all of the include files and libraries are part of your standard path.
- The installer will populate files in the following sub-directories:
-
- include/arrayfire.h - Primary ArrayFire include file
- include/af/*.h - Additional include files
- - lib/libaf* - CPU, CUDA, and OpenCL libraries (.a, .so)
- + lib/libaf* - CPU and OpenCL libraries (.a, .so)
- lib/libforge* - Visualization library
- share/ArrayFire/cmake/* - CMake config (find) scripts
- share/ArrayFire/examples/* - All ArrayFire examples
-
- Because ArrayFire follows standard installation practices, you can use basically
- any build system to create and compile projects that use ArrayFire.
- -Among the many possible build systems on Linux we suggest using ArrayFire with
- +Among the many possible build systems on GNU/Linux we suggest using ArrayFire with
- either CMake or Makefiles with CMake being our preferred build system.
-
- ## Prerequisite software
-
- To build ArrayFire projects you will need a compiler
-
- -#### Fedora, Centos and Redhat
- -
- -Install EPEL repo (not required for Fedora)
- -
- -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- -yum install epel-release
- -yum update
- -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- +#### BLAG Linux and GNU
-
- Install build dependencies
-
- @@ -44,7 +37,7 @@ Install build dependencies
- yum install gcc gcc-c++ cmake make
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- -#### Debian and Ubuntu
- +#### gNewSense and Trisquel
-
- Install common dependencies
-
- @@ -64,7 +57,7 @@ ArrayFire to an existing project.
- As [discussed above](#big-picture), ArrayFire ships with a series of CMake
- scripts to make finding and using our library easy.
- The scripts will automatically find all versions of the ArrayFire library
- -and pick the most powerful of the installed backends (typically CUDA).
- +and pick the most powerful of the installed backends.
-
- First create a file called `CMakeLists.txt` in your project directory:
-
- @@ -82,22 +75,13 @@ and populate it with the following code:
- FIND_PACKAGE(OpenCL)
- SET(EXTRA_LIBS ${CMAKE_THREAD_LIBS_INIT} ${OpenCL_LIBRARIES})
-
- - # Or if you intend to use CUDA, you need it as well as NVVM:
- - FIND_PACKAGE(CUDA)
- - FIND_PACKAGE(NVVM) # this FIND script can be found in the ArrayFire CMake example repository
- - SET(EXTRA_LIBS ${CMAKE_THREAD_LIBS_INIT} ${CUDA_LIBRARIES} ${NVVM_LIB})
- -
- ADD_EXECUTABLE(my_executable [list your source files here])
- TARGET_LINK_LIBRARIES(my_executable ${ArrayFire_LIBRARIES} ${EXTRA_LIBS})
-
- where `my_executable` is the name of the executable you wish to create.
- See the [CMake documentation](https://cmake.org/documentation/) for more
- information on how to use CMake.
- -Clearly the above code snippet precludes the use of both CUDA and OpenCL, see
- -the
- -[ArrayFire CMake Example](https://github.com/arrayfire/arrayfire-project-templates/tree/master/CMake);
- -for an example of how to build executables for both backends from the same
- -CMake script.
- +The above code snippet targets the OpenCL backend only.
-
- In the above code listing, the `FIND_PACKAGE` will find the ArrayFire include
- files, libraries, and define several variables including:
- @@ -112,8 +96,6 @@ If you wish to use a specific backend, the find script also defines these variab
-
- ArrayFire_CPU_FOUND - True of the ArrayFire CPU library has been found.
- ArrayFire_CPU_LIBRARIES - Location of ArrayFire's CPU library, if found
- - ArrayFire_CUDA_FOUND - True of the ArrayFire CUDA library has been found.
- - ArrayFire_CUDA_LIBRARIES - Location of ArrayFire's CUDA library, if found
- ArrayFire_OpenCL_FOUND - True of the ArrayFire OpenCL library has been found.
- ArrayFire_OpenCL_LIBRARIES - Location of ArrayFire's OpenCL library, if found
- ArrayFire_Unified_FOUND - True of the ArrayFire Unified library has been found.
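Pieced together, the hunks above imply an OpenCL-only `CMakeLists.txt` roughly like the following sketch; the project name, source file, and executable name are placeholders, and the variable list shown is the one the patched docs describe:

```cmake
CMAKE_MINIMUM_REQUIRED(VERSION 3.0)
PROJECT(my_project)

# Defines ArrayFire_INCLUDE_DIRS and ArrayFire_LIBRARIES, among others
FIND_PACKAGE(ArrayFire REQUIRED)
INCLUDE_DIRECTORIES(${ArrayFire_INCLUDE_DIRS})

# OpenCL backend: find OpenCL itself and collect the extra link libraries
FIND_PACKAGE(OpenCL)
SET(EXTRA_LIBS ${CMAKE_THREAD_LIBS_INIT} ${OpenCL_LIBRARIES})

ADD_EXECUTABLE(my_executable main.cpp)
TARGET_LINK_LIBRARIES(my_executable ${ArrayFire_LIBRARIES} ${EXTRA_LIBS})
```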
- @@ -121,13 +103,8 @@ If you wish to use a specific backend, the find script also defines these variab
-
- Therefore, if you wish to target a specific specific backend, simply replace
- `${ArrayFire_LIBRARIES}` with `${ArrayFire_CPU}`, `${ArrayFire_OPENCL}`,
- -`${ArrayFire_CUDA}`, or `${ArrayFire_Unified}` in the `TARGET_LINK_LIBRARIES`
- +or `${ArrayFire_Unified}` in the `TARGET_LINK_LIBRARIES`
- command above.
- -If you intend on building your software to link with all of these backends,
- -please see the
- -[CMake Project Example](https://github.com/arrayfire/arrayfire-project-templates)
- -which makes use of some fairly fun CMake tricks to avoid re-compiling code
- -whenever possible.
-
- Next we need to instruct CMake to create build instructions and then compile.
- We suggest using CMake's out-of-source build functionality to keep your build
- @@ -161,8 +138,7 @@ instructions.
- Similarly, you will need to specify the path to the ArrayFire library using
- the `-L` option (e.g. `-L/usr/local/lib`) followed by the specific ArrayFire
- library you wish to use using the `-l` option (for example `-lafcpu`,
- -`-lafopencl`, `-lafcuda`, or `-laf` for the CPU, OpenCL, CUDA, and unified
- -backends respectively.
- +`-lafopencl`, or `-laf` for the CPU, OpenCL, and unified backends, respectively).
-
- Here is a minimial example MakeFile which uses ArrayFire's CPU backend:
-
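The hunk ends before the Makefile itself appears; a minimal sketch consistent with the `-I`, `-L`, and `-l` flags described above (file and target names are placeholders, paths assume the default `/usr/local` prefix) might be:

```make
# Minimal Makefile for the ArrayFire CPU backend
CXX      ?= g++
CXXFLAGS  = -std=c++11 -I/usr/local/include
LDLIBS    = -L/usr/local/lib -lafcpu

helloworld: helloworld.cpp
	$(CXX) $(CXXFLAGS) -o $@ $< $(LDLIBS)

clean:
	rm -f helloworld
```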
|