PSCI Performance Measurements on Arm Juno Development Platform
==============================================================

This document summarises the findings of performance measurements of key
operations in the Trusted Firmware-A Power State Coordination Interface (PSCI)
implementation, using the in-built Performance Measurement Framework (PMF) and
runtime instrumentation timestamps.
Method
------

We used the `Juno R1 platform`_ for these tests, which has a cluster of 4
Cortex-A53 cores and a cluster of 2 Cortex-A57 cores running at the following
frequencies:
+-----------------+--------------------+
| Domain          | Frequency (MHz)    |
+=================+====================+
| Cortex-A57      | 900 (nominal)      |
+-----------------+--------------------+
| Cortex-A53      | 650 (underdrive)   |
+-----------------+--------------------+
| AXI subsystem   | 533                |
+-----------------+--------------------+
Juno supports CPU, cluster and system power down states, corresponding to power
levels 0, 1 and 2 respectively. It does not support any retention states.
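
A suspend to the deepest state therefore targets power level 2. As an
illustration, the sketch below composes a ``CPU_SUSPEND`` ``power_state``
parameter in the original (non-extended) PSCI format, where the target power
level sits in bits[25:24] and the state type (powerdown) in bit[16]; the macro
names are ours, not TF-A's.

.. code:: c

    /* A sketch, not TF-A's definitions: composing a CPU_SUSPEND power_state
     * value in the original (non-extended) PSCI format. */
    #define PSTATE_TYPE_POWERDOWN  (1u << 16)                  /* bit[16]: powerdown */
    #define PSTATE_PWR_LVL(lvl)    ((unsigned int)(lvl) << 24) /* bits[25:24]: level */

    /* Deepest state on Juno: system powerdown, i.e. power level 2. */
    static const unsigned int deepest_state =
            PSTATE_PWR_LVL(2) | PSTATE_TYPE_POWERDOWN;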
Given that runtime instrumentation using PMF is invasive, there is a small
(unquantified) overhead on the results. PMF uses the generic counter for
timestamps, which runs at 50MHz on Juno.
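
Raw timestamps are therefore in 50MHz counter ticks; converting a timestamp
delta to the microsecond figures reported below is a simple division by 50. A
minimal conversion sketch (the helper name is ours, not PMF's):

.. code:: c

    /* One tick of Juno's 50MHz generic counter is 0.02µs, so 50 ticks
     * make one microsecond. Helper name is illustrative. */
    static inline unsigned long long ticks_to_us(unsigned long long ticks)
    {
        return ticks / 50ULL;
    }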
The following source trees and binaries were used:

- `TF-A v2.12-rc0`_
- `TFTF v2.12-rc0`_

Please see the Runtime Instrumentation :ref:`Testing Methodology
<Runtime Instrumentation Methodology>` page for more details.
Procedure
---------

#. Build TFTF with runtime instrumentation enabled:

   .. code:: shell

      make CROSS_COMPILE=aarch64-none-elf- PLAT=juno \
          TESTS=runtime-instrumentation all

#. Fetch Juno's SCP binary from TF-A's archive:

   .. code:: shell

      curl --fail --connect-timeout 5 --retry 5 -sLS -o scp_bl2.bin \
          https://downloads.trustedfirmware.org/tf-a/css_scp_2.12.0/juno/release/juno-bl2.bin

#. Build TF-A with the following build options:

   .. code:: shell

      make CROSS_COMPILE=aarch64-none-elf- PLAT=juno \
          BL33="/path/to/tftf.bin" SCP_BL2="scp_bl2.bin" \
          ENABLE_RUNTIME_INSTRUMENTATION=1 fiptool all fip

#. Load the following images onto the development board: ``fip.bin``,
   ``scp_bl2.bin``.
Results
-------

``CPU_SUSPEND`` to deepest power level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
           parallel (v2.12)

    +---------+------+-------------------+------------------+--------------------+
    | Cluster | Core | Powerdown         | Wakeup           | Cache Flush        |
    +---------+------+-------------------+------------------+--------------------+
    | 0       | 0    | 244.52 (-65.43%)  | 26.92 (-32.60%)  | 5.54 (-96.70%)     |
    +---------+------+-------------------+------------------+--------------------+
    | 0       | 1    | 526.18 (+105.12%) | 416.1            | 138.52 (+2011.59%) |
    +---------+------+-------------------+------------------+--------------------+
    | 1       | 0    | 104.34            | 27.02 (-94.62%)  | 5.32               |
    +---------+------+-------------------+------------------+--------------------+
    | 1       | 1    | 384.98            | 23.06 (-85.40%)  | 4.48               |
    +---------+------+-------------------+------------------+--------------------+
    | 1       | 2    | 812.44 (+45.94%)  | 126.78           | 4.54               |
    +---------+------+-------------------+------------------+--------------------+
    | 1       | 3    | 986.84            | 77.22 (+176.58%) | 79.76              |
    +---------+------+-------------------+------------------+--------------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
           parallel (v2.11)

    +---------+------+-------------------+--------------------+-------------+
    | Cluster | Core | Powerdown         | Wakeup             | Cache Flush |
    +---------+------+-------------------+--------------------+-------------+
    | 0       | 0    | 112.98 (-53.44%)  | 26.16 (-89.33%)    | 5.48        |
    +---------+------+-------------------+--------------------+-------------+
    | 0       | 1    | 411.18            | 438.88 (+1572.56%) | 138.54      |
    +---------+------+-------------------+--------------------+-------------+
    | 1       | 0    | 261.82 (+150.88%) | 474.06 (+1649.30%) | 5.6         |
    +---------+------+-------------------+--------------------+-------------+
    | 1       | 1    | 714.76 (+86.84%)  | 26.44              | 4.48        |
    +---------+------+-------------------+--------------------+-------------+
    | 1       | 2    | 862.66            | 149.34 (-45.00%)   | 4.38        |
    +---------+------+-------------------+--------------------+-------------+
    | 1       | 3    | 1045.12           | 98.12 (-55.76%)    | 79.74       |
    +---------+------+-------------------+--------------------+-------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
           serial (v2.12)

    +---------+------+-----------+-----------------+-------------+
    | Cluster | Core | Powerdown | Wakeup          | Cache Flush |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 0    | 236.36    | 27.94 (-31.52%) | 138.0       |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 1    | 236.58    | 27.86 (-31.72%) | 138.2       |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 0    | 280.68    | 27.02           | 77.6        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 1    | 101.4     | 22.52           | 4.42        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 2    | 100.92    | 22.68           | 4.4         |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 3    | 100.96    | 22.54           | 4.38        |
    +---------+------+-----------+-----------------+-------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to deepest power level in
           serial (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    | 0       | 0    | 244.42    | 27.42  | 138.12      |
    +---------+------+-----------+--------+-------------+
    | 0       | 1    | 245.02    | 27.34  | 138.08      |
    +---------+------+-----------+--------+-------------+
    | 1       | 0    | 297.66    | 26.2   | 77.68       |
    +---------+------+-----------+--------+-------------+
    | 1       | 1    | 108.02    | 21.94  | 4.52        |
    +---------+------+-----------+--------+-------------+
    | 1       | 2    | 107.48    | 21.88  | 4.46        |
    +---------+------+-----------+--------+-------------+
    | 1       | 3    | 107.52    | 21.86  | 4.46        |
    +---------+------+-----------+--------+-------------+
``CPU_SUSPEND`` to power level 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in
           parallel (v2.12)

    +--------------------------------------------------------------------+
    | test_rt_instr_cpu_susp_parallel                                    |
    +---------+------+-------------------+-----------------+-------------+
    | Cluster | Core | Powerdown         | Wakeup          | Cache Flush |
    +---------+------+-------------------+-----------------+-------------+
    | 0       | 0    | 663.12            | 19.66 (-39.21%) | 8.26        |
    +---------+------+-------------------+-----------------+-------------+
    | 0       | 1    | 804.18            | 19.24 (-40.65%) | 8.1         |
    +---------+------+-------------------+-----------------+-------------+
    | 1       | 0    | 105.58 (-58.80%)  | 19.68           | 7.42        |
    +---------+------+-------------------+-----------------+-------------+
    | 1       | 1    | 245.02 (-39.67%)  | 19.8            | 6.82        |
    +---------+------+-------------------+-----------------+-------------+
    | 1       | 2    | 383.82 (-30.83%)  | 18.84           | 7.06        |
    +---------+------+-------------------+-----------------+-------------+
    | 1       | 3    | 523.36 (+391.23%) | 19.0            | 7.3         |
    +---------+------+-------------------+-----------------+-------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in
           parallel (v2.11)

    +---------+------+-------------------+--------+-------------+
    | Cluster | Core | Powerdown         | Wakeup | Cache Flush |
    +---------+------+-------------------+--------+-------------+
    | 0       | 0    | 704.46            | 19.28  | 7.86        |
    +---------+------+-------------------+--------+-------------+
    | 0       | 1    | 853.66            | 18.78  | 7.82        |
    +---------+------+-------------------+--------+-------------+
    | 1       | 0    | 556.52 (+425.51%) | 19.06  | 7.82        |
    +---------+------+-------------------+--------+-------------+
    | 1       | 1    | 113.28 (-70.47%)  | 19.28  | 7.48        |
    +---------+------+-------------------+--------+-------------+
    | 1       | 2    | 260.62 (-50.22%)  | 19.8   | 7.26        |
    +---------+------+-------------------+--------+-------------+
    | 1       | 3    | 408.16 (+66.94%)  | 19.82  | 7.38        |
    +---------+------+-------------------+--------+-------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in serial (v2.12)

    +---------+------+-----------+-----------------+-------------+
    | Cluster | Core | Powerdown | Wakeup          | Cache Flush |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 0    | 100.04    | 20.32 (-38.50%) | 5.62        |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 1    | 99.78     | 20.6 (-36.10%)  | 5.42        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 0    | 278.28    | 19.52           | 4.32        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 1    | 97.3      | 19.44           | 4.26        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 2    | 97.56     | 19.52           | 4.32        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 3    | 97.52     | 19.46           | 4.26        |
    +---------+------+-----------+-----------------+-------------+
.. table:: ``CPU_SUSPEND`` latencies (µs) to power level 0 in serial (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    | 0       | 0    | 106.78    | 19.2   | 5.32        |
    +---------+------+-----------+--------+-------------+
    | 0       | 1    | 107.44    | 19.64  | 5.44        |
    +---------+------+-----------+--------+-------------+
    | 1       | 0    | 295.82    | 19.14  | 4.34        |
    +---------+------+-----------+--------+-------------+
    | 1       | 1    | 104.34    | 19.18  | 4.28        |
    +---------+------+-----------+--------+-------------+
    | 1       | 2    | 103.96    | 19.34  | 4.4         |
    +---------+------+-----------+--------+-------------+
    | 1       | 3    | 104.32    | 19.18  | 4.34        |
    +---------+------+-----------+--------+-------------+
``CPU_OFF`` on all non-lead CPUs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``CPU_OFF`` on all non-lead CPUs in sequence, then ``CPU_SUSPEND`` on the lead
core to the deepest power level.
.. table:: ``CPU_OFF`` latencies (µs) on all non-lead CPUs (v2.12)

    +---------+------+-----------+-----------------+-------------+
    | Cluster | Core | Powerdown | Wakeup          | Cache Flush |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 0    | 236.3     | 30.88 (-29.30%) | 137.76      |
    +---------+------+-----------+-----------------+-------------+
    | 0       | 1    | 236.66    | 30.5 (-29.23%)  | 138.02      |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 0    | 175.9     | 27.0            | 77.86       |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 1    | 100.96    | 27.56           | 4.26        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 2    | 101.04    | 26.48           | 4.38        |
    +---------+------+-----------+-----------------+-------------+
    | 1       | 3    | 101.08    | 26.74           | 4.4         |
    +---------+------+-----------+-----------------+-------------+
.. table:: ``CPU_OFF`` latencies (µs) on all non-lead CPUs (v2.11)

    +---------+------+-----------+--------+-------------+
    | Cluster | Core | Powerdown | Wakeup | Cache Flush |
    +---------+------+-----------+--------+-------------+
    | 0       | 0    | 243.62    | 29.84  | 137.66      |
    +---------+------+-----------+--------+-------------+
    | 0       | 1    | 243.88    | 29.54  | 137.8       |
    +---------+------+-----------+--------+-------------+
    | 1       | 0    | 183.26    | 26.22  | 77.76       |
    +---------+------+-----------+--------+-------------+
    | 1       | 1    | 107.64    | 26.74  | 4.34        |
    +---------+------+-----------+--------+-------------+
    | 1       | 2    | 107.52    | 25.9   | 4.32        |
    +---------+------+-----------+--------+-------------+
    | 1       | 3    | 107.74    | 25.8   | 4.34        |
    +---------+------+-----------+--------+-------------+
``PSCI_VERSION`` in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. table:: ``PSCI_VERSION`` latency (µs) in parallel on all cores (v2.12)

    +-------------+--------+--------------+
    | Cluster     | Core   | Latency      |
    +-------------+--------+--------------+
    | 0           | 0      | 1.0          |
    +-------------+--------+--------------+
    | 0           | 1      | 1.02         |
    +-------------+--------+--------------+
    | 1           | 0      | 0.52         |
    +-------------+--------+--------------+
    | 1           | 1      | 0.94         |
    +-------------+--------+--------------+
    | 1           | 2      | 0.94         |
    +-------------+--------+--------------+
    | 1           | 3      | 0.92         |
    +-------------+--------+--------------+
.. table:: ``PSCI_VERSION`` latency (µs) in parallel on all cores (v2.11)

    +-------------+--------+--------------+
    | Cluster     | Core   | Latency      |
    +-------------+--------+--------------+
    | 0           | 0      | 1.26         |
    +-------------+--------+--------------+
    | 0           | 1      | 0.96         |
    +-------------+--------+--------------+
    | 1           | 0      | 0.54         |
    +-------------+--------+--------------+
    | 1           | 1      | 0.94         |
    +-------------+--------+--------------+
    | 1           | 2      | 0.92         |
    +-------------+--------+--------------+
    | 1           | 3      | 1.02         |
    +-------------+--------+--------------+
Annotated Historic Results
--------------------------

The following results are based on the upstream `TF master as of 31/01/2017`_.
TF-A was built using the same build instructions as detailed in the procedure
above.

In the results below, CPUs 0-3 refer to CPUs in the little cluster (A53) and
CPUs 4-5 refer to CPUs in the big cluster (A57). In all cases CPU 4 is the lead
CPU.

``PSCI_ENTRY`` corresponds to the powerdown latency, ``PSCI_EXIT`` the wakeup
latency, and ``CFLUSH_OVERHEAD`` the latency of the cache flush operation.
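
Each of these figures is the difference of a pair of PMF timestamps taken
around the corresponding phase. A sketch of how the three intervals could be
derived is shown below; the field names and their pairing are our reading of
the instrumentation, not a quote from the sources.

.. code:: c

    /* Hypothetical timestamp record for one CPU, in 50MHz counter ticks. */
    struct rt_instr_ts {
        unsigned long long enter_psci;       /* SMC enters the PSCI layer   */
        unsigned long long enter_hw_low_pwr; /* about to execute WFI        */
        unsigned long long exit_hw_low_pwr;  /* first timestamp after reset */
        unsigned long long exit_psci;        /* control returns to caller   */
        unsigned long long enter_cflush;     /* cache flush begins          */
        unsigned long long exit_cflush;      /* cache flush ends            */
    };

    /* PSCI_ENTRY: from SMC entry until the hardware low-power state. */
    static unsigned long long psci_entry(const struct rt_instr_ts *ts)
    {
        return ts->enter_hw_low_pwr - ts->enter_psci;
    }

    /* PSCI_EXIT: from wakeup until control returns to the caller. */
    static unsigned long long psci_exit(const struct rt_instr_ts *ts)
    {
        return ts->exit_psci - ts->exit_hw_low_pwr;
    }

    /* CFLUSH_OVERHEAD: the cache flush portion of the powerdown path. */
    static unsigned long long cflush_overhead(const struct rt_instr_ts *ts)
    {
        return ts->exit_cflush - ts->enter_cflush;
    }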
``CPU_SUSPEND`` to deepest power level on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (us) | ``PSCI_EXIT`` (us) | ``CFLUSH_OVERHEAD`` (us) |
+=======+=====================+====================+==========================+
| 0     | 27                  | 20                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 114                 | 86                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 202                 | 58                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 375                 | 29                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 20                  | 22                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 290                 | 18                 | 206                      |
+-------+---------------------+--------------------+--------------------------+
A large variance in ``PSCI_ENTRY`` and ``PSCI_EXIT`` times across CPUs is
observed due to TF PSCI lock contention. In the worst case, CPU 3 has to wait
for the 3 other CPUs in the cluster (0-2) to complete ``PSCI_ENTRY`` and release
the lock before proceeding.

The ``CFLUSH_OVERHEAD`` times for CPUs 3 and 5 are higher because they are the
last CPUs in their respective clusters to power down, therefore both the L1 and
L2 caches are flushed.

The ``CFLUSH_OVERHEAD`` time for CPU 5 is a lot larger than that for CPU 3
because the L2 cache for the big cluster (2MB) is twice the size of that for
the little cluster (1MB).
``CPU_SUSPEND`` to power level 0 on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (us) | ``PSCI_EXIT`` (us) | ``CFLUSH_OVERHEAD`` (us) |
+=======+=====================+====================+==========================+
| 0     | 116                 | 14                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 204                 | 14                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 287                 | 13                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 376                 | 13                 | 9                        |
+-------+---------------------+--------------------+--------------------------+
| 4     | 29                  | 15                 | 7                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 21                  | 15                 | 8                        |
+-------+---------------------+--------------------+--------------------------+
There is no lock contention in TF generic code at power level 0, but the large
variance in ``PSCI_ENTRY`` times across CPUs is due to lock contention in Juno
platform code. The platform lock is used to mediate access to a single SCP
communication channel. This is compounded by the SCP firmware waiting for each
AP CPU to enter WFI before making the channel available to other CPUs, which
effectively serializes the SCP power down commands from all CPUs.

On platforms with a more efficient CPU power down mechanism, it should be
possible to make the ``PSCI_ENTRY`` times smaller and consistent.

The ``PSCI_EXIT`` times are consistent across all CPUs because TF does not
require locks at power level 0.

The ``CFLUSH_OVERHEAD`` times for all CPUs are small and consistent since only
the cache associated with power level 0 is flushed (L1).
``CPU_SUSPEND`` to deepest power level on all CPUs in sequence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (us) | ``PSCI_EXIT`` (us) | ``CFLUSH_OVERHEAD`` (us) |
+=======+=====================+====================+==========================+
| 0     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 1     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 2     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 3     | 114                 | 20                 | 94                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 195                 | 22                 | 180                      |
+-------+---------------------+--------------------+--------------------------+
| 5     | 21                  | 17                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
The ``CFLUSH_OVERHEAD`` times for lead CPU 4 and all CPUs in the non-lead
cluster are large because all other CPUs in the cluster are powered down during
the test. The ``CPU_SUSPEND`` call powers down to the cluster level, requiring a
flush of both L1 and L2 caches.

The ``CFLUSH_OVERHEAD`` time for CPU 4 is a lot larger than those for the little
CPUs because the L2 cache for the big cluster (2MB) is twice the size of that
for the little cluster (1MB).

The ``PSCI_ENTRY`` and ``CFLUSH_OVERHEAD`` times for CPU 5 are low because lead
CPU 4 continues to run while CPU 5 is suspended. Hence CPU 5 only powers down to
level 0, which only requires an L1 cache flush.
``CPU_SUSPEND`` to power level 0 on all CPUs in sequence
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (us) | ``PSCI_EXIT`` (us) | ``CFLUSH_OVERHEAD`` (us) |
+=======+=====================+====================+==========================+
| 0     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 1     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 2     | 21                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 3     | 22                  | 14                 | 5                        |
+-------+---------------------+--------------------+--------------------------+
| 4     | 17                  | 14                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
| 5     | 18                  | 15                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
Here the times are small and consistent since there is no contention and it is
only necessary to flush the cache to power level 0 (L1). This is the best-case
scenario.

The ``PSCI_ENTRY`` times for CPUs in the big cluster are slightly smaller than
for the CPUs in the little cluster due to greater CPU performance.

The ``PSCI_EXIT`` times are generally lower than in the last test because the
cluster remains powered on throughout the test and there is less code to execute
on power on (for example, no need to enter CCI coherency).
``CPU_OFF`` on all non-lead CPUs in sequence then ``CPU_SUSPEND`` on lead CPU to deepest power level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The test sequence here is as follows (a pseudocode sketch follows the list):

1. Call ``CPU_ON`` and ``CPU_OFF`` on each non-lead CPU in sequence.
2. Program the wakeup timer and suspend the lead CPU to the deepest power level.
3. Call ``CPU_ON`` on each non-lead CPU to get the timestamps from each CPU.
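
The sketch below restates that sequence in C-style pseudocode. The ``psci_*``
wrappers, helper functions and constants are hypothetical stand-ins for the
test framework's primitives, not the TFTF API.

.. code:: c

    /* Hypothetical helpers; each psci_* wrapper issues the matching PSCI SMC. */
    extern int  psci_cpu_on(unsigned int cpu, void (*entry)(void));
    extern void psci_cpu_off_self(void);     /* CPU_OFF runs on the target CPU */
    extern void psci_cpu_suspend(unsigned int power_state);
    extern void wait_until_off(unsigned int cpu);
    extern void program_wakeup_timer(void);
    extern void report_timestamps(void);

    #define NUM_CPUS 6
    #define LEAD_CPU 4

    static void run_sequence(unsigned int deepest_state)
    {
        /* 1. Cycle each non-lead CPU through CPU_ON then CPU_OFF. */
        for (unsigned int cpu = 0; cpu < NUM_CPUS; cpu++) {
            if (cpu == LEAD_CPU)
                continue;
            psci_cpu_on(cpu, psci_cpu_off_self);
            wait_until_off(cpu);
        }

        /* 2. Arm a wakeup source, then suspend the lead CPU to the
         *    deepest power level. */
        program_wakeup_timer();
        psci_cpu_suspend(deepest_state);

        /* 3. After wakeup, power the non-lead CPUs back on so each can
         *    report its recorded timestamps. */
        for (unsigned int cpu = 0; cpu < NUM_CPUS; cpu++)
            if (cpu != LEAD_CPU)
                psci_cpu_on(cpu, report_timestamps);
    }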
+-------+---------------------+--------------------+--------------------------+
| CPU   | ``PSCI_ENTRY`` (us) | ``PSCI_EXIT`` (us) | ``CFLUSH_OVERHEAD`` (us) |
+=======+=====================+====================+==========================+
| 0     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 1     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 2     | 110                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 3     | 111                 | 28                 | 93                       |
+-------+---------------------+--------------------+--------------------------+
| 4     | 195                 | 22                 | 181                      |
+-------+---------------------+--------------------+--------------------------+
| 5     | 20                  | 23                 | 6                        |
+-------+---------------------+--------------------+--------------------------+
The ``CFLUSH_OVERHEAD`` times for all little CPUs are large because all other
CPUs in that cluster are powered down during the test. The ``CPU_OFF`` call
powers down to the cluster level, requiring a flush of both L1 and L2 caches.

The ``PSCI_ENTRY`` and ``CFLUSH_OVERHEAD`` times for CPU 5 are small because
lead CPU 4 is running and CPU 5 only powers down to level 0, which only requires
an L1 cache flush.

The ``CFLUSH_OVERHEAD`` time for CPU 4 is a lot larger than those for the little
CPUs because the L2 cache for the big cluster (2MB) is twice the size of that
for the little cluster (1MB).

The ``PSCI_EXIT`` times for CPUs in the big cluster are slightly smaller than
for CPUs in the little cluster due to greater CPU performance. These times are
generally greater than the ``PSCI_EXIT`` times in the ``CPU_SUSPEND`` tests
because there is more code to execute in the "on finisher" compared to the
"suspend finisher" (for example, GIC redistributor register programming).
``PSCI_VERSION`` on all CPUs in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since very little code is associated with ``PSCI_VERSION``, this test
approximates the round trip latency for handling a fast SMC at EL3 in TF.
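
What is being timed is a single fast SMC from the normal world and back. A
sketch of such a call is shown below: the function ID ``0x84000000`` for
``PSCI_VERSION`` comes from the SMC Calling Convention, while the wrapper
itself (AArch64 GCC-style inline assembly) is our illustration, not TF or
TFTF code.

.. code:: c

    /* Issue PSCI_VERSION (function ID 0x84000000) as a fast SMC and return
     * the result from x0: major version in bits[31:16], minor in bits[15:0]. */
    static unsigned long psci_version(void)
    {
        register unsigned long x0 __asm__("x0") = 0x84000000UL;

        __asm__ volatile("smc #0" : "+r"(x0) : : "x1", "x2", "x3", "memory");
        return x0;
    }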
+-------+-------------------+
| CPU   | TOTAL TIME (ns)   |
+=======+===================+
| 0     | 3020              |
+-------+-------------------+
| 1     | 2940              |
+-------+-------------------+
| 2     | 2980              |
+-------+-------------------+
| 3     | 3060              |
+-------+-------------------+
| 4     | 520               |
+-------+-------------------+
| 5     | 720               |
+-------+-------------------+
The times for the big CPUs are lower than those for the little CPUs due to
greater CPU performance.

We suspect the time for lead CPU 4 is shorter than that for CPU 5 due to subtle
cache effects, given that these measurements are at the nanosecond level.
--------------

*Copyright (c) 2019-2024, Arm Limited and Contributors. All rights reserved.*

.. _Juno R1 platform: https://developer.arm.com/documentation/100122/latest/
.. _TF master as of 31/01/2017: https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/?id=c38b36d
.. _TF-A v2.12-rc0: https://git.trustedfirmware.org/TF-A/trusted-firmware-a.git/tree/?h=v2.12-rc0
.. _TFTF v2.12-rc0: https://git.trustedfirmware.org/TF-A/tf-a-tests.git/tree/?h=v2.12-rc0