
NVIDIA Tegra
============

-  .. rubric:: T194
   :name: t194

T194 has eight NVIDIA Carmel CPU cores in a coherent multi-processor
configuration. The Carmel cores support the ARM Architecture version 8.2,
executing both 64-bit AArch64 code and 32-bit AArch32 code. The Carmel
processors are organized as four dual-core clusters, where each cluster has
a dedicated 2 MiB Level-2 unified cache. A high-speed coherency fabric
connects these processor complexes and allows heterogeneous multi-processing
with all eight cores if required.
-  .. rubric:: T186
   :name: t186

The NVIDIA® Parker (T186) series system-on-chip (SoC) delivers a heterogeneous
multi-processing (HMP) solution designed to optimize performance and
efficiency.

T186 has dual NVIDIA Denver 2 ARM® CPU cores plus quad ARM Cortex®-A57 cores
in a coherent multiprocessor configuration. The Denver 2 and Cortex-A57 cores
support ARMv8, executing both 64-bit AArch64 code and 32-bit AArch32 code,
including legacy ARMv7 applications. The Denver 2 processors each have 128 KB
Instruction and 64 KB Data Level 1 caches, and share a 2 MB Level 2 unified
cache. The Cortex-A57 processors each have 48 KB Instruction and 32 KB Data
Level 1 caches, and also share a 2 MB Level 2 unified cache. A high-speed
coherency fabric connects these two processor complexes and allows
heterogeneous multi-processing with all six cores if required.
Denver is NVIDIA's own custom-designed, 64-bit, dual-core CPU which is
fully Armv8-A architecture compatible. Each of the two Denver cores
implements a 7-way superscalar microarchitecture (up to 7 concurrent
micro-ops can be executed per clock), and includes a 128 KB 4-way L1
instruction cache, a 64 KB 4-way L1 data cache, and a 2 MB 16-way L2
cache, which services both cores.

Denver implements an innovative process called Dynamic Code Optimization,
which optimizes frequently used software routines at runtime into dense,
highly tuned microcode-equivalent routines. These are stored in a
dedicated, 128 MB main-memory-based optimization cache. After being read
into the instruction cache, the optimized micro-ops are executed,
re-fetched and executed from the instruction cache as long as needed and
capacity allows.

Effectively, this reduces the need to re-optimize the software routines.
Instead of using hardware to extract the instruction-level parallelism
(ILP) inherent in the code, Denver extracts the ILP once via software
techniques, and then executes those routines repeatedly, thus amortizing
the cost of ILP extraction over the many execution instances.

Denver also features new low-latency power-state transitions, in addition
to extensive power-gating and dynamic voltage and clock scaling based on
workloads.
-  .. rubric:: T210
   :name: t210

T210 has quad Arm® Cortex®-A57 cores in a switched configuration with a
companion set of quad Arm Cortex-A53 cores. The Cortex-A57 and Cortex-A53
cores support Armv8-A, executing both 64-bit AArch64 code and 32-bit AArch32
code, including legacy Armv7-A applications. The Cortex-A57 processors each
have 48 KB Instruction and 32 KB Data Level 1 caches, and share a 2 MB
Level 2 unified cache. The Cortex-A53 processors each have 32 KB Instruction
and 32 KB Data Level 1 caches, and share a 512 KB Level 2 unified cache.
Directory structure
-------------------

-  plat/nvidia/tegra/common - Common code for all Tegra SoCs
-  plat/nvidia/tegra/soc/txxx - Chip-specific code
Trusted OS dispatcher
---------------------

Tegra supports multiple Trusted OSes:

-  Trusted Little Kernel (TLK): to include the 'tlkd' dispatcher in
   the image, pass 'SPD=tlkd' on the command line while preparing a BL31 image.
-  Trusty: to include the 'trusty' dispatcher in the image, pass
   'SPD=trusty' on the command line while preparing a BL31 image.

This allows other Trusted OS vendors to use the upstream code and include
their dispatchers in the image without changing any makefiles.

These are the Trusted OSes supported by Tegra platforms:

-  Tegra210: TLK and Trusty
-  Tegra186: Trusty
-  Tegra194: Trusty
Scatter files
-------------

Tegra platforms currently support both scatter files and ld.S scripts. The
scatter files allow the ARMLINK linker to generate BL31 binaries. For now,
a single common scatter file, plat/nvidia/tegra/scat/bl31.scat, is used for
all Tegra SoCs. The ``LINKER`` build variable needs to point to the ARMLINK
binary for the scatter file to be used. BL31 image generation has been
verified with ARMCLANG (compilation) and ARMLINK (linking) on the Tegra186
platform.
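As a sketch of how this fits together, a build that selects the scatter file
might look like the following; the toolchain path is a placeholder, and the
``TARGET_SOC``/``SPD`` values are illustrative:

.. code:: shell

    # Hypothetical example (path is a placeholder): pointing LINKER at the
    # ARMLINK binary makes the build use the common scatter file
    # plat/nvidia/tegra/scat/bl31.scat instead of the ld.S script.
    make PLAT=tegra TARGET_SOC=t186 SPD=trusty \
        LINKER=<path-to-armlink>/bin/armlink bl31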
Preparing the BL31 image to run on Tegra SoCs
---------------------------------------------

.. code:: shell

    CROSS_COMPILE=<path-to-aarch64-gcc>/bin/aarch64-none-elf- make PLAT=tegra \
    TARGET_SOC=<target-soc e.g. t194|t186|t210> SPD=<dispatcher e.g. trusty|tlkd> \
    bl31

Platforms wanting to use a different TZDRAM_BASE can add ``TZDRAM_BASE=<value>``
to the build command line.
The Tegra platform code expects a pointer to the following platform-specific
structure via the 'x1' register from the BL2 layer, which is used by the
bl31_early_platform_setup() handler to extract the TZDRAM carveout base and
size for loading the Trusted OS, and the UART port ID to be used. The Tegra
memory controller driver programs this base/size in order to restrict NS
accesses.

.. code:: c

    typedef struct plat_params_from_bl2 {
        /* TZ memory size */
        uint64_t tzdram_size;
        /* TZ memory base */
        uint64_t tzdram_base;
        /* UART port ID */
        int uart_id;
        /* L2 ECC parity protection disable flag */
        int l2_ecc_parity_prot_dis;
        /* SHMEM base address for storing the boot logs */
        uint64_t boot_profiler_shmem_base;
    } plat_params_from_bl2_t;
Power Management
----------------

The PSCI implementation expects each platform to expose the 'power state'
parameter to be used during the 'SYSTEM SUSPEND' call. The state-id field
is implementation defined on Tegra SoCs and is preferably defined by
tegra_def.h.
Tegra configs
-------------

-  'tegra_enable_l2_ecc_parity_prot': This flag enables the L2 ECC and Parity
   Protection bit for Arm Cortex-A57 CPUs during CPU boot. This flag is
   enabled by Tegra SoCs during 'Cluster power up' or 'System Suspend' exit.