
Scaling synapse via workers
===========================

Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.

All of the below is highly experimental and subject to change as Synapse
evolves, but we document it here to help folks who need highly scalable
Synapses similar to the one running matrix.org!

All processes continue to share the same database instance, and as such, workers
only work with postgres-based synapse deployments (sharing a single sqlite
database across multiple processes is a recipe for disaster, plus you should be
using postgres anyway if you care about scalability).

The workers communicate with the master synapse process via a synapse-specific
TCP protocol called 'replication' - analogous to MySQL- or Postgres-style
database replication - which feeds a stream of relevant data to the workers so
they can be kept in sync with the main synapse process and database state.

Configuration
-------------

To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. See `<reverse_proxy.rst>`_ for
information on setting up a reverse proxy.

To enable workers, you need to add two replication listeners to the master
synapse, e.g.::

    listeners:
      # The TCP replication port
      - port: 9092
        bind_address: '127.0.0.1'
        type: replication
      # The HTTP replication port
      - port: 9093
        bind_address: '127.0.0.1'
        type: http
        resources:
          - names: [replication]

Under **no circumstances** should these replication API listeners be exposed to
the public internet; they currently implement no authentication whatsoever and
are unencrypted.

(Roughly, the TCP port is used for streaming data from the master to the
workers, and the HTTP port for the workers to send data to the main
synapse process.)

You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
file for the master synapse process will also need to be created, because the
process will not otherwise be started automatically. That configuration should
look like this::

    worker_app: synapse.app.homeserver
    daemonize: true

Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any), logging configuration, etc.
You should minimise the number of overrides, though, to keep the configuration
manageable.

You must specify the type of worker application (``worker_app``). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process:
``worker_replication_host`` should specify the host of the main synapse,
``worker_replication_port`` should point to the TCP replication listener port,
and ``worker_replication_http_port`` should point to the HTTP replication port.
Currently, the ``event_creator`` and ``federation_reader`` workers require
specifying ``worker_replication_http_port``.

For instance::

    worker_app: synapse.app.synchrotron

    # The replication listener on the synapse to talk to.
    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    worker_listeners:
      - type: http
        port: 8083
        resources:
          - names:
              - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/synchrotron.pid
    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml

...is a full configuration for a synchrotron worker instance, which will expose
a plain HTTP ``/sync`` endpoint on port 8083 separately from the ``/sync``
endpoint provided by the main synapse.

Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (``localhost:8083`` in the above example).

Finally, to actually run your worker-based synapse, you must pass synctl the
``-a`` commandline option to tell it to operate on all the worker
configurations found in the given directory, e.g.::

    synctl -a $CONFIG/workers start

Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.

To manipulate a specific worker, you pass the ``-w`` option to synctl::

    synctl -w $CONFIG/workers/synchrotron.yaml restart

Available worker applications
-----------------------------

``synapse.app.pusher``
~~~~~~~~~~~~~~~~~~~~~~

Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set ``start_pushers: False`` in the
shared configuration file to stop the main synapse sending these notifications.

Note this worker cannot be load-balanced: only one instance should be active.
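
As a concrete sketch, a pusher worker configuration might look like this (the
port numbers and file paths here are illustrative, not prescriptive)::

    worker_app: synapse.app.pusher

    # The replication listeners on the main synapse process.
    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/pusher.pid
    worker_log_config: /home/matrix/synapse/config/pusher_log_config.yaml

Since this worker serves no REST endpoints, it needs no ``worker_listeners``
block; remember to also set ``start_pushers: False`` in the shared
configuration file.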

``synapse.app.synchrotron``
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The synchrotron handles ``sync`` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions::

    ^/_matrix/client/(v2_alpha|r0)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
    ^/_matrix/client/(api/v1|r0)/initialSync$
    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.

It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
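
For instance, a second synchrotron instance could reuse the example
configuration shown earlier, changing only the listener port and the pid/log
files (the values here are illustrative)::

    worker_app: synapse.app.synchrotron

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    worker_listeners:
      - type: http
        port: 8084
        resources:
          - names:
              - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/synchrotron2.pid
    worker_log_config: /home/matrix/synapse/config/synchrotron2_log_config.yaml

The reverse-proxy would then load-balance ``/sync`` traffic across ports 8083
and 8084.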

``synapse.app.appservice``
~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set ``notify_appservices: False`` in the
shared configuration file to stop the main synapse sending these notifications.

Note this worker cannot be load-balanced: only one instance should be active.

``synapse.app.federation_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions::

    ^/_matrix/federation/v1/event/
    ^/_matrix/federation/v1/state/
    ^/_matrix/federation/v1/state_ids/
    ^/_matrix/federation/v1/backfill/
    ^/_matrix/federation/v1/get_missing_events/
    ^/_matrix/federation/v1/publicRooms
    ^/_matrix/federation/v1/query/
    ^/_matrix/federation/v1/make_join/
    ^/_matrix/federation/v1/make_leave/
    ^/_matrix/federation/v1/send_join/
    ^/_matrix/federation/v1/send_leave/
    ^/_matrix/federation/v1/invite/
    ^/_matrix/federation/v1/query_auth/
    ^/_matrix/federation/v1/event_auth/
    ^/_matrix/federation/v1/exchange_third_party_invite/
    ^/_matrix/federation/v1/send/
    ^/_matrix/key/v2/query

The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.

The ``^/_matrix/federation/v1/send/`` endpoint must only be handled by a single
instance.
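
A federation_reader configuration might look like the following sketch. Note
that this worker requires ``worker_replication_http_port``, and that its
listener exposes the ``federation`` resource rather than ``client`` (the port
numbers and file paths are illustrative)::

    worker_app: synapse.app.federation_reader

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    worker_listeners:
      - type: http
        port: 8011
        resources:
          - names:
              - federation

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/federation_reader.pid
    worker_log_config: /home/matrix/synapse/config/federation_reader_log_config.yaml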

``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set ``send_federation: False`` in the
shared configuration file to stop the main synapse sending this traffic.

Note this worker cannot be load-balanced: only one instance should be active.

``synapse.app.media_repository``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles the media repository. It can handle all endpoints starting with::

    /_matrix/media/

You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.

Note this worker cannot be load-balanced: only one instance should be active.
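
A media_repository worker configuration might look like this sketch, with a
listener exposing the ``media`` resource (the port number and file paths are
illustrative)::

    worker_app: synapse.app.media_repository

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    worker_listeners:
      - type: http
        port: 8085
        resources:
          - names:
              - media

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/media_repository.pid
    worker_log_config: /home/matrix/synapse/config/media_repository_log_config.yaml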

``synapse.app.client_reader``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
    ^/_matrix/client/(api/v1|r0|unstable)/login$
    ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$

Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance::

    ^/_matrix/client/(r0|unstable)/register$

``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~

Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$

``synapse.app.frontend_proxy``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload

If ``use_presence`` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions::

    ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status

This "stub" presence handler will pass through ``GET`` requests but make the
``PUT`` effectively a no-op.

It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
file. For example::

    worker_main_http_uri: http://127.0.0.1:8008
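
Putting that together, a complete frontend_proxy configuration might look like
the following sketch, where ``http://127.0.0.1:8008`` is assumed to be the
client listener of the main synapse (the other port numbers and file paths are
illustrative)::

    worker_app: synapse.app.frontend_proxy

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092

    # Where to proxy requests this worker cannot handle itself.
    worker_main_http_uri: http://127.0.0.1:8008

    worker_listeners:
      - type: http
        port: 8086
        resources:
          - names:
              - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/frontend_proxy.pid
    worker_log_config: /home/matrix/synapse/config/frontend_proxy_log_config.yaml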

``synapse.app.event_creator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Handles some event creation. It can handle REST endpoints matching::

    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/
    ^/_matrix/client/(api/v1|r0|unstable)/profile/

It will create events locally and then send them on to the main synapse
instance to be persisted and handled.
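
An event_creator configuration might look like this sketch; like the
federation_reader, it requires ``worker_replication_http_port`` so that it can
send the events it creates on to the main process (the port numbers and file
paths are illustrative)::

    worker_app: synapse.app.event_creator

    worker_replication_host: 127.0.0.1
    worker_replication_port: 9092
    worker_replication_http_port: 9093

    worker_listeners:
      - type: http
        port: 8087
        resources:
          - names:
              - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/event_creator.pid
    worker_log_config: /home/matrix/synapse/config/event_creator_log_config.yaml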