
Scaling synapse via workers
---------------------------
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.

All processes continue to share the same database instance, and as such,
workers only work with postgres-based synapse deployments (sharing a single
sqlite database across multiple processes is a recipe for disaster, plus you
should be using postgres anyway if you care about scalability).

The workers communicate with the master synapse process via a
synapse-specific HTTP protocol called 'replication' - analogous to MySQL- or
Postgres-style database replication; it feeds a stream of relevant data to
the workers so they can be kept in sync with the main synapse process and
database state.
To enable workers, you need to add a replication listener to the master
synapse, e.g.::

    listeners:
      - port: 9092
        bind_address: '127.0.0.1'
        type: http
        tls: false
        x_forwarded: false
        resources:
          - names: [replication]
            compress: false
Under **no circumstances** should this replication API listener be exposed to
the public internet; it currently implements no authentication whatsoever and
is unencrypted HTTP.
You then create a set of configs for the various worker processes. These
worker configuration files should be stored in a dedicated subdirectory, to
allow synctl to manipulate them.
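As an illustration (the directory and file names here are just an example,
not required by synapse), the resulting configuration tree might look like::

    $CONFIG/
        homeserver.yaml
        workers/
            synchrotron.yaml
            pusher.yaml
            federation_reader.yaml

Each file under ``workers/`` is then a worker configuration file as described
below.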
The currently available worker applications are:

* synapse.app.pusher - handles sending push notifications to sygnal and email
* synapse.app.synchrotron - handles /sync endpoints. Can scale horizontally
  through multiple instances.
* synapse.app.appservice - handles output traffic to Application Services
* synapse.app.federation_reader - handles receiving federation traffic
  (including the public_rooms API)
* synapse.app.media_repository - handles the media repository.
* synapse.app.client_reader - handles client API endpoints like /publicRooms
Each worker configuration file inherits the configuration of the main
homeserver configuration file. You can then override configuration specific
to that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.

You must specify the type of worker application (worker_app) and the
replication endpoint that it's talking to on the main synapse process
(worker_replication_url).
For instance::

    worker_app: synapse.app.synchrotron

    # The replication listener on the synapse to talk to.
    worker_replication_url: http://127.0.0.1:9092/_synapse/replication

    worker_listeners:
      - type: http
        port: 8083
        resources:
          - names:
              - client

    worker_daemonize: True
    worker_pid_file: /home/matrix/synapse/synchrotron.pid
    worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
...is a full configuration for a synchrotron worker instance, which will
expose a plain HTTP /sync endpoint on port 8083, separately from the /sync
endpoint provided by the main synapse.

Obviously you should then configure your loadbalancer to route the /sync
endpoint to the synchrotron instance(s).
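As a sketch of what that routing might look like (this assumes nginx as the
reverse proxy and the r0/v2_alpha client API paths for /sync; the exact
paths, ports, and proxy software are assumptions to adapt to your own
deployment)::

    # Hypothetical nginx snippet: send /sync traffic to the synchrotron
    # on port 8083, and everything else to the main synapse on port 8008.
    location ~ ^/_matrix/client/(r0|v2_alpha)/sync {
        proxy_pass http://127.0.0.1:8083;
    }

    location /_matrix {
        proxy_pass http://127.0.0.1:8008;
    }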
Finally, to actually run your worker-based synapse, you must pass synctl the
-a commandline option to tell it to operate on all the worker configurations
found in the given directory, e.g.::

    synctl -a $CONFIG/workers start
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
To manipulate a specific worker, you pass the -w option to synctl::

    synctl -w $CONFIG/workers/synchrotron.yaml restart
All of the above is highly experimental and subject to change as Synapse
evolves, but we are documenting it here to help folks needing highly scalable
Synapses similar to the one running matrix.org!