
Scaling synapse via workers
===========================

Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.

All of the below is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!

All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).

The workers communicate with the master synapse process via a synapse-specific
TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
Configuration
-------------

To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. The caveats regarding running a
reverse-proxy on the federation port still apply (see
https://github.com/matrix-org/synapse/blob/master/README.rst#reverse-proxying-the-federation-port).
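
For illustration, a minimal nginx fragment for such routing might look like the
following sketch (the ports are assumptions matching the examples later in this
document: a synchrotron worker on 8083 and the main synapse on 8008)::

    # Route /sync traffic to a synchrotron worker; everything else goes to
    # the main synapse process.
    location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
        proxy_pass http://127.0.0.1:8083;
    }

    location /_matrix {
        proxy_pass http://127.0.0.1:8008;
    }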
To enable workers, you need to add two replication listeners to the master
synapse, e.g.::

    listeners:
      # The TCP replication port
      - port: 9092
        bind_address: '127.0.0.1'
        type: replication
      # The HTTP replication port
      - port: 9093
        bind_address: '127.0.0.1'
        type: http
        resources:
          - names: [replication]

Under **no circumstances** should these replication API listeners be exposed to
the public internet; they currently implement no authentication whatsoever and
are unencrypted.

(Roughly, the TCP port is used for streaming data from the master to the
workers, and the HTTP port for the workers to send data to the main
synapse process.)
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them.

An additional configuration for the master synapse process will need to be
created because the process will not be started automatically. That
configuration should look like this::

    worker_app: synapse.app.homeserver
    daemonize: true
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.

You must specify the type of worker application (``worker_app``). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process.
``worker_replication_host`` should specify the host of the main synapse,
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.

Currently, the ``event_creator`` and ``federation_reader`` workers require specifying
``worker_replication_http_port``.
For instance::

    worker_app: synapse.app.synchrotron

    # The replication listener on the synapse to talk to.
    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    worker_listeners:
     - type: http
       port: 8083
       resources:
         - names:
           - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/synchrotron.pid
    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml

...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync`` endpoint provided
by the main synapse.

Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (``localhost:8083`` in the above example).
Finally, to actually run your worker-based synapse, you must pass synctl the -a
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.::

    synctl -a $CONFIG/workers start

Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.

To manipulate a specific worker, you pass the -w option to synctl::

    synctl -w $CONFIG/workers/synchrotron.yaml restart
Available worker applications
-----------------------------

``synapse.app.pusher``
~~~~~~~~~~~~~~~~~~~~~~

Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set ``start_pushers: False`` in the
shared configuration file to stop the main synapse sending these notifications.

Note this worker cannot be load-balanced: only one instance should be active.
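
A minimal configuration for this worker might look like the following sketch
(the file paths are illustrative, and the replication ports assume the listener
example above)::

    worker_app: synapse.app.pusher

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    # No worker_listeners: the pusher serves no REST endpoints itself.
    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/pusher.pid
    worker_log_config: /home/matrix/synapse/config/pusher_log_config.yaml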
``synapse.app.synchrotron``
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The synchrotron handles ``sync`` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions::

    ^/_matrix/client/(v2_alpha|r0)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
    ^/_matrix/client/(api/v1|r0)/initialSync$
    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.

It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
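
One pragmatic approximation, if extracting the userid is not practical, is to
hash on the access token itself, so each token (and hence each logged-in
device) consistently reaches the same instance. A sketch for nginx, assuming
two synchrotron instances on ports 8083 and 8084 (both ports are assumptions;
note that requests carrying the token only in the ``Authorization`` header
will all hash to the same bucket)::

    upstream synchrotron {
        # Consistent hashing on the access_token query parameter keeps a
        # given client pinned to one instance.
        hash $arg_access_token consistent;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
        proxy_pass http://synchrotron;
    }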
``synapse.app.appservice``
~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set ``notify_appservices: False`` in the
shared configuration file to stop the main synapse sending these notifications.

Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.federation_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions::

    ^/_matrix/federation/v1/event/
    ^/_matrix/federation/v1/state/
    ^/_matrix/federation/v1/state_ids/
    ^/_matrix/federation/v1/backfill/
    ^/_matrix/federation/v1/get_missing_events/
    ^/_matrix/federation/v1/publicRooms
    ^/_matrix/federation/v1/query/
    ^/_matrix/federation/v1/make_join/
    ^/_matrix/federation/v1/make_leave/
    ^/_matrix/federation/v1/send_join/
    ^/_matrix/federation/v1/send_leave/
    ^/_matrix/federation/v1/invite/
    ^/_matrix/federation/v1/query_auth/
    ^/_matrix/federation/v1/event_auth/
    ^/_matrix/federation/v1/exchange_third_party_invite/
    ^/_matrix/federation/v1/send/

The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.

The ``^/_matrix/federation/v1/send/`` endpoint must only be handled by a single
instance.
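
If you load-balance across several federation_reader instances, that
restriction can be enforced at the reverse-proxy. An nginx sketch (the ports
and upstream name are assumptions, not anything synapse itself requires)::

    upstream federation_readers {
        server 127.0.0.1:8101;
        server 127.0.0.1:8102;
    }

    # Transaction sending must go to exactly one instance; the longer
    # prefix match below takes priority over the general federation route.
    location /_matrix/federation/v1/send/ {
        proxy_pass http://127.0.0.1:8101;
    }

    location /_matrix/federation/v1/ {
        proxy_pass http://federation_readers;
    }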
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set ``send_federation: False`` in the
shared configuration file to stop the main synapse sending this traffic.

Note this worker cannot be load-balanced: only one instance should be active.
``synapse.app.media_repository``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles the media repository. It can handle all endpoints starting with::

    /_matrix/media/

You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.

Note this worker cannot be load-balanced: only one instance should be active.
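
A configuration sketch for this worker (the port and file paths are
illustrative; note that the listener exposes the ``media`` resource rather
than ``client``)::

    worker_app: synapse.app.media_repository

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    worker_listeners:
     - type: http
       port: 8085
       resources:
         - names:
           - media

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/media_repository.pid
    worker_log_config: /home/matrix/synapse/config/media_repository_log_config.yaml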
``synapse.app.client_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~

Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
``synapse.app.frontend_proxy``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload

If ``use_presence`` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status

This "stub" presence handler will pass through ``GET`` requests but make the
``PUT`` effectively a no-op.

It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file. For example::

    worker_main_http_uri: http://127.0.0.1:8008
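
Putting this together, a full frontend_proxy configuration might look like the
following sketch (the listener port and file paths are illustrative)::

    worker_app: synapse.app.frontend_proxy

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    # Where to forward requests this worker cannot handle itself.
    worker_main_http_uri: http://127.0.0.1:8008

    worker_listeners:
     - type: http
       port: 8084
       resources:
         - names:
           - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/frontend_proxy.pid
    worker_log_config: /home/matrix/synapse/config/frontend_proxy_log_config.yaml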
``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles some event creation. It can handle REST endpoints matching::

    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/
    ^/_matrix/client/(api/v1|r0|unstable)/profile/

It will create events locally and then send them on to the main synapse
instance to be persisted and handled.