===============================================
 drm/tegra NVIDIA Tegra GPU and display driver
===============================================

NVIDIA Tegra SoCs support a set of display, graphics and video functions via
the host1x controller. host1x supplies command streams, gathered from a push
buffer provided directly by the CPU, to its clients via channels. Software,
or blocks amongst themselves, can use syncpoints for synchronization.

Up until, but not including, Tegra124 (aka Tegra K1) the drm/tegra driver
supports the built-in GPU, comprised of the gr2d and gr3d engines. Starting
with Tegra124 the GPU is based on the NVIDIA desktop GPU architecture and
supported by the drm/nouveau driver.

The drm/tegra driver supports NVIDIA Tegra SoC generations since Tegra20. It
has three parts:

- A host1x driver that provides infrastructure and access to the host1x
  services.

- A KMS driver that supports the display controllers as well as a number of
  outputs, such as RGB, HDMI, DSI, and DisplayPort.

- A set of custom userspace IOCTLs that can be used to submit jobs to the
  GPU and video engines via host1x.
Driver Infrastructure
=====================

The various host1x clients need to be bound together into a logical device in
order to expose their functionality to users. The infrastructure that supports
this is implemented in the host1x driver. When a driver is registered with the
infrastructure it provides a list of compatible strings specifying the devices
that it needs. The infrastructure creates a logical device and scans the
device tree for matching device nodes, adding the required clients to a list.
Drivers for individual clients register with the infrastructure as well and
are added to the logical host1x device.

Once all clients are available, the infrastructure will initialize the logical
device using a driver-provided function which will set up the bits specific to
the subsystem and in turn initialize each of its clients.

Similarly, when one of the clients is unregistered, the infrastructure will
destroy the logical device by calling back into the driver, which ensures that
the subsystem-specific bits are torn down and the clients destroyed in turn.
Host1x Infrastructure Reference
-------------------------------

.. kernel-doc:: include/linux/host1x.h

.. kernel-doc:: drivers/gpu/host1x/bus.c
   :export:

Host1x Syncpoint Reference
--------------------------

.. kernel-doc:: drivers/gpu/host1x/syncpt.c
   :export:
KMS driver
==========

The display hardware has remained mostly backwards compatible over the various
Tegra SoC generations, up until Tegra186 which introduces several changes that
make it difficult to support with a parameterized driver.

Display Controllers
-------------------

Tegra SoCs have two display controllers, each of which can be associated with
zero or more outputs. Outputs can also share a single display controller, but
only if they run with compatible display timings. Two display controllers can
also share a single framebuffer, allowing cloned configurations even if modes
on two outputs don't match. A display controller is modelled as a CRTC in KMS
terms.

On Tegra186, the number of display controllers has been increased to three. A
display controller can no longer drive all of the outputs. While two of these
controllers can drive both DSI outputs and both SOR outputs, the third cannot
drive any DSI.
Windows
~~~~~~~

A display controller controls a set of windows that can be used to composite
multiple buffers onto the screen. While it is possible to assign arbitrary Z
ordering to individual windows (by programming the corresponding blending
registers), this is currently not supported by the driver. Instead, it will
assume a fixed Z ordering of the windows (window A is the root window, that
is, the lowest, while windows B and C are overlaid on top of window A). The
overlay windows support multiple pixel formats and can automatically convert
from YUV to RGB at scanout time. This makes them useful for displaying video
content. In KMS, each window is modelled as a plane. Each display controller
has a hardware cursor that is exposed as a cursor plane.
Outputs
-------

The type and number of supported outputs varies between Tegra SoC generations.
All generations support at least HDMI. While earlier generations supported the
very simple RGB interfaces (one per display controller), recent generations no
longer do and instead provide standard interfaces such as DSI and eDP/DP.

Outputs are modelled as a composite encoder/connector pair.

RGB/LVDS
~~~~~~~~

This interface is no longer available since Tegra124. It has been replaced by
the more standard DSI and eDP interfaces.
HDMI
~~~~

HDMI is supported on all Tegra SoCs. Starting with Tegra210, HDMI is provided
by the versatile SOR output, which supports eDP, DP and HDMI. The SOR is able
to support HDMI 2.0, though support for this is currently not merged.

DSI
~~~

Although Tegra has supported DSI since Tegra30, the controller has changed in
several ways in Tegra114. Since none of the publicly available development
boards prior to Dalmore (Tegra114) have made use of DSI, only Tegra114 and
later are supported by the drm/tegra driver.
eDP/DP
~~~~~~

eDP was first introduced in Tegra124 where it was used to drive the display
panel for notebook form factors. Tegra210 added support for full DisplayPort,
though this is currently not implemented in the drm/tegra driver.
Userspace Interface
===================

The userspace interface provided by drm/tegra allows applications to create
GEM buffers, access and control syncpoints as well as submit command streams
to host1x.

GEM Buffers
-----------

The ``DRM_IOCTL_TEGRA_GEM_CREATE`` IOCTL is used to create a GEM buffer object
with Tegra-specific flags. This is useful for buffers that should be tiled, or
that are to be scanned out upside down (useful for 3D content).

After a GEM buffer object has been created, its memory can be mapped by an
application using the mmap offset returned by the ``DRM_IOCTL_TEGRA_GEM_MMAP``
IOCTL.
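The create-then-map flow might look roughly as follows. This is a hedged
sketch: the struct layouts and ioctl numbers below mirror the drm/tegra UAPI
(``include/uapi/drm/tegra_drm.h``) as understood at the time of writing, but
real applications should include the actual UAPI header rather than redefine
these types.

.. code-block:: c

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    /* Mirrors struct drm_tegra_gem_create from the UAPI header. */
    struct drm_tegra_gem_create {
            uint64_t size;
            uint32_t flags;
            uint32_t handle;        /* filled in by the kernel */
    };

    /* Mirrors struct drm_tegra_gem_mmap from the UAPI header. */
    struct drm_tegra_gem_mmap {
            uint32_t handle;
            uint32_t pad;
            uint64_t offset;        /* filled in by the kernel */
    };

    /* DRM driver-private IOCTLs start at DRM_COMMAND_BASE (0x40). */
    #define DRM_IOCTL_TEGRA_GEM_CREATE \
            _IOWR('d', 0x40 + 0x00, struct drm_tegra_gem_create)
    #define DRM_IOCTL_TEGRA_GEM_MMAP \
            _IOWR('d', 0x40 + 0x01, struct drm_tegra_gem_mmap)

    /* Create a GEM buffer object on fd and map it; NULL on failure. */
    static void *tegra_bo_create_and_map(int fd, uint64_t size)
    {
            struct drm_tegra_gem_create create;
            struct drm_tegra_gem_mmap map;

            memset(&create, 0, sizeof(create));
            create.size = size;

            if (ioctl(fd, DRM_IOCTL_TEGRA_GEM_CREATE, &create) < 0)
                    return NULL;

            memset(&map, 0, sizeof(map));
            map.handle = create.handle;

            if (ioctl(fd, DRM_IOCTL_TEGRA_GEM_MMAP, &map) < 0)
                    return NULL;

            return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                        fd, map.offset);
    }

Note that the mmap offset is a fake offset into the DRM file, not a physical
address; the subsequent ``mmap()`` on the DRM file descriptor is what actually
maps the buffer.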
Syncpoints
----------

The current value of a syncpoint can be obtained by executing the
``DRM_IOCTL_TEGRA_SYNCPT_READ`` IOCTL. Incrementing the syncpoint is achieved
using the ``DRM_IOCTL_TEGRA_SYNCPT_INCR`` IOCTL.

Userspace can also request blocking on a syncpoint. To do so, it needs to
execute the ``DRM_IOCTL_TEGRA_SYNCPT_WAIT`` IOCTL, specifying the value of
the syncpoint to wait for. The kernel will release the application when the
syncpoint reaches that value or after a specified timeout.
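Syncpoints are 32-bit counters that wrap around, so "has the syncpoint
reached the threshold" must be evaluated in modular arithmetic. The following
is an illustrative sketch of a wrap-safe comparison, not the kernel's exact
code (the real logic lives in ``drivers/gpu/host1x/syncpt.c``); it gives the
right answer as long as the two values are less than 2^31 increments apart.

.. code-block:: c

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Wrap-safe "current >= threshold" for a 32-bit wrapping counter:
     * interpret the modular difference as a signed value.
     */
    static bool syncpt_reached(uint32_t current, uint32_t threshold)
    {
            return (int32_t)(current - threshold) >= 0;
    }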
Command Stream Submission
-------------------------

Before an application can submit command streams to host1x it needs to open a
channel to an engine using the ``DRM_IOCTL_TEGRA_OPEN_CHANNEL`` IOCTL. Client
IDs are used to identify the target of the channel. When a channel is no
longer needed, it can be closed using the ``DRM_IOCTL_TEGRA_CLOSE_CHANNEL``
IOCTL. To retrieve the syncpoint associated with a channel, an application
can use the ``DRM_IOCTL_TEGRA_GET_SYNCPT`` IOCTL.

After opening a channel, submitting command streams is easy. The application
writes commands into the memory backing a GEM buffer object and passes these
to the ``DRM_IOCTL_TEGRA_SUBMIT`` IOCTL along with various other parameters,
such as the syncpoints or relocations used in the job submission.
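The commands written into the GEM buffer are host1x opcode words. As an
illustration of what a minimal stream might look like, the sketch below packs
a SETCL (select class) opcode followed by an INCR (incrementing register
write) opcode; the encodings mirror the helpers in the kernel's host1x driver
(``drivers/gpu/host1x/hw/host1x01_hardware.h``), and the register offset and
payload values are made up for the example.

.. code-block:: c

    #include <stddef.h>
    #include <stdint.h>

    /* Select the class (engine) that subsequent methods are addressed to. */
    static uint32_t host1x_opcode_setclass(uint32_t classid, uint32_t offset,
                                           uint32_t mask)
    {
            return (0x0u << 28) | (offset << 16) | (classid << 6) | mask;
    }

    /* Write count words to consecutive registers starting at offset. */
    static uint32_t host1x_opcode_incr(uint32_t offset, uint32_t count)
    {
            return (0x1u << 28) | (offset << 16) | count;
    }

    #define HOST1X_CLASS_GR2D 0x51  /* gr2d class ID */

    /* Fill words[] with a tiny stream; returns the number of words used. */
    static size_t build_stream(uint32_t *words)
    {
            size_t i = 0;

            words[i++] = host1x_opcode_setclass(HOST1X_CLASS_GR2D, 0, 0);
            words[i++] = host1x_opcode_incr(0x9, 2);  /* two register writes */
            words[i++] = 0x00000038;                  /* payload word 0 */
            words[i++] = 0x00000001;                  /* payload word 1 */

            return i;
    }

A buffer built this way would then be referenced by the submit arguments as a
gather, together with the relocations and syncpoint increments for the job.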