
Merge tag 'v0.31.0'

Changes in synapse v0.31.0 (2018-06-06)
=======================================

The most notable change from v0.30.0 is the switch to the Python Prometheus
library to improve system stats reporting. WARNING: this changes a number of
Prometheus metrics in a backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
===========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)

Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!
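
One of the helpers referenced above (PR #3245) is a batching iterator. As a rough illustration of what such a utility can look like — a sketch using only the standard library, not necessarily the merged code — consider:

```python
from itertools import islice

def batch_iter(iterable, size):
    """Yield tuples of up to `size` items from `iterable`.

    Sketch of a batching helper like the one added in PR #3245;
    the merged implementation may differ in detail.
    """
    iterator = iter(iterable)
    # iter(callable, sentinel) keeps calling the lambda until it returns ()
    return iter(lambda: tuple(islice(iterator, size)), ())

print(list(batch_iter(range(7), 3)))  # → [(0, 1, 2), (3, 4, 5), (6,)]
```

Such a helper lets storage code page over large ID sets without materialising the whole sequence at once.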

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
Neil Johnson, 6 years ago
commit 752b7b32ed
100 files changed, 1305 insertions, 1330 deletions
  1. .gitignore (+3 -0)
  2. CHANGES.rst (+58 -0)
  3. docs/code_style.rst (+1 -1)
  4. docs/metrics-howto.rst (+76 -11)
  5. docs/server_notices.md (+5 -2)
  6. synapse/__init__.py (+1 -1)
  7. synapse/api/auth.py (+4 -2)
  8. synapse/api/filtering.py (+1 -1)
  9. synapse/app/_base.py (+13 -0)
  10. synapse/app/appservice.py (+7 -0)
  11. synapse/app/client_reader.py (+9 -2)
  12. synapse/app/event_creator.py (+9 -1)
  13. synapse/app/federation_reader.py (+9 -1)
  14. synapse/app/federation_sender.py (+9 -1)
  15. synapse/app/frontend_proxy.py (+9 -1)
  16. synapse/app/homeserver.py (+13 -4)
  17. synapse/app/media_repository.py (+9 -1)
  18. synapse/app/pusher.py (+9 -1)
  19. synapse/app/synchrotron.py (+9 -1)
  20. synapse/app/user_dir.py (+9 -1)
  21. synapse/config/consent_config.py (+7 -1)
  22. synapse/config/repository.py (+3 -0)
  23. synapse/config/server.py (+10 -0)
  24. synapse/event_auth.py (+2 -2)
  25. synapse/events/__init__.py (+1 -1)
  26. synapse/events/utils.py (+3 -1)
  27. synapse/events/validator.py (+4 -2)
  28. synapse/federation/federation_client.py (+9 -12)
  29. synapse/federation/federation_server.py (+10 -9)
  30. synapse/federation/send_queue.py (+4 -9)
  31. synapse/federation/transaction_queue.py (+34 -29)
  32. synapse/groups/groups_server.py (+3 -1)
  33. synapse/handlers/_base.py (+2 -2)
  34. synapse/handlers/appservice.py (+12 -14)
  35. synapse/handlers/auth.py (+3 -3)
  36. synapse/handlers/deactivate_account.py (+4 -0)
  37. synapse/handlers/device.py (+10 -8)
  38. synapse/handlers/e2e_keys.py (+7 -6)
  39. synapse/handlers/federation.py (+37 -24)
  40. synapse/handlers/groups_local.py (+2 -1)
  41. synapse/handlers/message.py (+8 -4)
  42. synapse/handlers/presence.py (+46 -40)
  43. synapse/handlers/room.py (+1 -1)
  44. synapse/handlers/room_list.py (+2 -1)
  45. synapse/handlers/room_member.py (+14 -10)
  46. synapse/handlers/search.py (+1 -1)
  47. synapse/handlers/sync.py (+22 -10)
  48. synapse/handlers/user_directory.py (+9 -1)
  49. synapse/http/client.py (+7 -14)
  50. synapse/http/matrixfederationclient.py (+11 -13)
  51. synapse/http/request_metrics.py (+91 -125)
  52. synapse/http/server.py (+2 -2)
  53. synapse/http/site.py (+11 -11)
  54. synapse/metrics/__init__.py (+127 -114)
  55. synapse/metrics/metric.py (+0 -328)
  56. synapse/metrics/process_collector.py (+0 -122)
  57. synapse/metrics/resource.py (+2 -21)
  58. synapse/notifier.py (+11 -13)
  59. synapse/push/baserules.py (+1 -1)
  60. synapse/push/bulk_push_rule_evaluator.py (+20 -23)
  61. synapse/push/httppusher.py (+4 -9)
  62. synapse/push/mailer.py (+2 -1)
  63. synapse/push/presentable_names.py (+1 -1)
  64. synapse/push/push_rule_evaluator.py (+4 -2)
  65. synapse/python_dependencies.py (+1 -0)
  66. synapse/replication/tcp/protocol.py (+42 -59)
  67. synapse/replication/tcp/resource.py (+19 -18)
  68. synapse/rest/client/transactions.py (+1 -1)
  69. synapse/rest/client/v1/presence.py (+5 -3)
  70. synapse/rest/media/v1/media_repository.py (+2 -1)
  71. synapse/rest/media/v1/preview_url_resource.py (+5 -3)
  72. synapse/server_notices/consent_server_notices.py (+5 -0)
  73. synapse/state.py (+27 -24)
  74. synapse/storage/_base.py (+46 -38)
  75. synapse/storage/client_ips.py (+4 -2)
  76. synapse/storage/devices.py (+5 -4)
  77. synapse/storage/end_to_end_keys.py (+4 -2)
  78. synapse/storage/event_push_actions.py (+3 -1)
  79. synapse/storage/events.py (+39 -39)
  80. synapse/storage/events_worker.py (+1 -1)
  81. synapse/storage/filtering.py (+1 -1)
  82. synapse/storage/keys.py (+12 -4)
  83. synapse/storage/prepare_database.py (+1 -1)
  84. synapse/storage/presence.py (+2 -5)
  85. synapse/storage/receipts.py (+30 -29)
  86. synapse/storage/registration.py (+37 -0)
  87. synapse/storage/roommember.py (+6 -4)
  88. synapse/storage/schema/delta/50/add_creation_ts_users_index.sql (+19 -0)
  89. synapse/storage/search.py (+5 -4)
  90. synapse/storage/signatures.py (+10 -2)
  91. synapse/storage/state.py (+25 -22)
  92. synapse/storage/transactions.py (+9 -1)
  93. synapse/storage/user_directory.py (+5 -3)
  94. synapse/util/__init__.py (+18 -0)
  95. synapse/util/caches/__init__.py (+69 -18)
  96. synapse/util/caches/descriptors.py (+10 -6)
  97. synapse/util/caches/dictionary_cache.py (+1 -1)
  98. synapse/util/caches/expiringcache.py (+2 -2)
  99. synapse/util/caches/response_cache.py (+6 -5)
  100. synapse/util/caches/stream_change_cache.py (+1 -1)

+ 3 - 0
.gitignore

@@ -1,5 +1,6 @@
 *.pyc
 .*.swp
+*~
 
 .DS_Store
 _trial_temp/
@@ -13,6 +14,7 @@ docs/build/
 cmdclient_config.json
 homeserver*.db
 homeserver*.log
+homeserver*.log.*
 homeserver*.pid
 homeserver*.yaml
 
@@ -40,6 +42,7 @@ media_store/
 *.tac
 
 build/
+venv/
 
 localhost-800*/
 static/client/register/register_config.js

+ 58 - 0
CHANGES.rst

@@ -1,3 +1,61 @@
+Changes in synapse v0.31.0 (2018-06-06)
+=======================================
+
+Most notable change from v0.30.0 is to switch to python prometheus library to improve system
+stats reporting. WARNING this changes a number of prometheus metrics in a
+backwards-incompatible manner. For more details, see
+`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.
+
+Bug Fixes:
+
+* Fix metric documentation tables (PR #3341)
+* Fix LaterGuage error handling (694968f)
+* Fix replication metrics (b7e7fd2)
+
+Changes in synapse v0.31.0-rc1 (2018-06-04)
+==========================================
+
+Features:
+
+* Switch to the Python Prometheus library (PR #3256, #3274)
+* Let users leave the server notice room after joining (PR #3287)
+
+
+Changes:
+
+* daily user type phone home stats (PR #3264)
+* Use iter* methods for _filter_events_for_server (PR #3267)
+* Docs on consent bits (PR #3268)
+* Remove users from user directory on deactivate (PR #3277)
+* Avoid sending consent notice to guest users (PR #3288)
+* disable CPUMetrics if no /proc/self/stat (PR #3299)
+* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
+* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
+* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
+* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
+* Put python's logs into Trial when running unit tests (PR #3319)
+
+Changes, python 3 migration:
+
+* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
+* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
+* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
+* use repr, not str (PR #3246) Thanks to @NotAFile!
+* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
+* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
+* more six iteritems (PR #3279) Thanks to @NotAFile!
+* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
+* remaining isintance fixes (PR #3281) Thanks to @NotAFile!
+* py3-ize state.py (PR #3283) Thanks to @NotAFile!
+* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
+* use memoryview in py3 (PR #3303) Thanks to @NotAFile!
+
+Bugs:
+
+* Fix federation backfill bugs (PR #3261)
+* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
+
+
 Changes in synapse v0.30.0 (2018-05-24)
 ==========================================
 

+ 1 - 1
docs/code_style.rst

@@ -16,7 +16,7 @@
       print("I am a fish %s" %
             "moo")
 
-      and this::
+    and this::
 
         print(
             "I am a fish %s" %

+ 76 - 11
docs/metrics-howto.rst

@@ -1,25 +1,47 @@
 How to monitor Synapse metrics using Prometheus
 ===============================================
 
-1. Install prometheus:
+1. Install Prometheus:
 
    Follow instructions at http://prometheus.io/docs/introduction/install/
 
-2. Enable synapse metrics:
+2. Enable Synapse metrics:
 
-   Simply setting a (local) port number will enable it. Pick a port.
-   prometheus itself defaults to 9090, so starting just above that for
-   locally monitored services seems reasonable. E.g. 9092:
+   There are two methods of enabling metrics in Synapse.
 
-   Add to homeserver.yaml::
+   The first serves the metrics as a part of the usual web server and can be
+   enabled by adding the "metrics" resource to the existing listener as such::
 
-     metrics_port: 9092
+     resources:
+       - names:
+         - client
+         - metrics
 
-   Also ensure that ``enable_metrics`` is set to ``True``.
+   This provides a simple way of adding metrics to your Synapse installation,
+   and serves under ``/_synapse/metrics``. If you do not wish your metrics be
+   publicly exposed, you will need to either filter it out at your load
+   balancer, or use the second method.
 
-   Restart synapse.
+   The second method runs the metrics server on a different port, in a
+   different thread to Synapse. This can make it more resilient to heavy load
+   meaning metrics cannot be retrieved, and can be exposed to just internal
+   networks easier. The served metrics are available over HTTP only, and will
+   be available at ``/``.
 
-3. Add a prometheus target for synapse.
+   Add a new listener to homeserver.yaml::
+
+     listeners:
+       - type: metrics
+         port: 9000
+         bind_addresses:
+           - '0.0.0.0'
+
+   For both options, you will need to ensure that ``enable_metrics`` is set to
+   ``True``.
+
+   Restart Synapse.
+
+3. Add a Prometheus target for Synapse.
 
    It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
 
@@ -31,7 +53,50 @@ How to monitor Synapse metrics using Prometheus
    If your prometheus is older than 1.5.2, you will need to replace
    ``static_configs`` in the above with ``target_groups``.
 
-   Restart prometheus.
+   Restart Prometheus.
+
+
+Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
+---------------------------------------------------------------------------------
+
+The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
+
+All time duration-based metrics have been changed to be seconds. This affects:
+
++----------------------------------+
+| msec -> sec metrics              |
++==================================+
+| python_gc_time                   |
++----------------------------------+
+| python_twisted_reactor_tick_time |
++----------------------------------+
+| synapse_storage_query_time       |
++----------------------------------+
+| synapse_storage_schedule_time    |
++----------------------------------+
+| synapse_storage_transaction_time |
++----------------------------------+
+
+Several metrics have been changed to be histograms, which sort entries into
+buckets and allow better analysis. The following metrics are now histograms:
+
++-------------------------------------------+
+| Altered metrics                           |
++===========================================+
+| python_gc_time                            |
++-------------------------------------------+
+| python_twisted_reactor_pending_calls      |
++-------------------------------------------+
+| python_twisted_reactor_tick_time          |
++-------------------------------------------+
+| synapse_http_server_response_time_seconds |
+-------------------------------------------+
+| synapse_storage_query_time                |
++-------------------------------------------+
+| synapse_storage_schedule_time             |
++-------------------------------------------+
+| synapse_storage_transaction_time          |
++-------------------------------------------+
 
 
 Block and response metrics renamed for 0.27.0
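
The tables above summarise two changes: durations are now reported in seconds, and several metrics became histograms. A Prometheus-style histogram simply counts each observation into cumulative buckets; the following standard-library sketch (with arbitrarily chosen bucket bounds) shows the idea — it is an illustration, not Synapse's implementation:

```python
import bisect

class MiniHistogram:
    """Toy Prometheus-style histogram: cumulative bucket counts plus a
    running sum and count. Bucket bounds are in seconds, matching the
    msec -> sec change described above. Illustrative only."""

    def __init__(self, buckets=(0.005, 0.05, 0.5, 5.0)):
        self.upper_bounds = list(buckets) + [float("inf")]
        self.bucket_counts = [0] * len(self.upper_bounds)
        self.sum = 0.0
        self.count = 0

    def observe(self, seconds):
        # Prometheus buckets are cumulative: a sample is counted in every
        # bucket whose upper bound is >= the observed value.
        first = bisect.bisect_left(self.upper_bounds, seconds)
        for i in range(first, len(self.bucket_counts)):
            self.bucket_counts[i] += 1
        self.sum += seconds
        self.count += 1

h = MiniHistogram()
for duration in (0.003, 0.04, 0.9):
    h.observe(duration)
print(h.bucket_counts)  # → [1, 2, 2, 3, 3]
```

The cumulative bucket counts are what let Prometheus compute quantiles across scrapes, which plain counters could not offer.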

+ 5 - 2
docs/server_notices.md

@@ -5,7 +5,7 @@ Server Notices
 channel whereby server administrators can send messages to users on the server.
 
 They are used as part of communication of the server polices(see
-[consent_tracking.md](consent_tracking.md)), however the intention is that 
+[consent_tracking.md](consent_tracking.md)), however the intention is that
 they may also find a use for features such as "Message of the day".
 
 This is a feature specific to Synapse, but it uses standard Matrix
@@ -24,7 +24,10 @@ history; it will appear to have come from the 'server notices user' (see
 below).
 
 The user is prevented from sending any messages in this room by the power
-levels. They also cannot leave it.
+levels.
+
+Having joined the room, the user can leave the room if they want. Subsequent
+server notices will then cause a new room to be created.
 
 Synapse configuration
 ---------------------

+ 1 - 1
synapse/__init__.py

@@ -16,4 +16,4 @@
 """ This is a reference implementation of a Matrix home server.
 """
 
-__version__ = "0.30.0"
+__version__ = "0.31.0"

+ 4 - 2
synapse/api/auth.py

@@ -15,6 +15,8 @@
 
 import logging
 
+from six import itervalues
+
 import pymacaroons
 from twisted.internet import defer
 
@@ -57,7 +59,7 @@ class Auth(object):
         self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
 
         self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
-        register_cache("token_cache", self.token_cache)
+        register_cache("cache", "token_cache", self.token_cache)
 
     @defer.inlineCallbacks
     def check_from_context(self, event, context, do_sig_check=True):
@@ -66,7 +68,7 @@ class Auth(object):
         )
         auth_events = yield self.store.get_events(auth_events_ids)
         auth_events = {
-            (e.type, e.state_key): e for e in auth_events.values()
+            (e.type, e.state_key): e for e in itervalues(auth_events)
         }
         self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)
 
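
The `itervalues` hunk above is typical of the six-based Python 2/3 compatibility work: `six.itervalues(d)` iterates lazily on both versions, where Python 2's `d.values()` would build an intermediate list. The re-keying pattern itself can be shown self-contained (the `Event` type here is a hypothetical stand-in for Synapse's richer event objects):

```python
from collections import namedtuple

# Hypothetical stand-in for a state event; Synapse's event objects are richer.
Event = namedtuple("Event", ["type", "state_key", "event_id"])

def index_by_state_key(events_by_id):
    """Re-key an {event_id: event} map by (type, state_key).

    Mirrors the comprehension in Auth.check_from_context; on a 2+3
    codebase six.itervalues(events_by_id) replaces .values() to avoid
    an intermediate list on Python 2.
    """
    return {(e.type, e.state_key): e for e in events_by_id.values()}

events = {
    "$a": Event("m.room.create", "", "$a"),
    "$b": Event("m.room.member", "@alice:hs", "$b"),
}
auth_events = index_by_state_key(events)
print(auth_events[("m.room.member", "@alice:hs")].event_id)  # → $b
```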

+ 1 - 1
synapse/api/filtering.py

@@ -411,7 +411,7 @@ class Filter(object):
         return room_ids
 
     def filter(self, events):
-        return filter(self.check, events)
+        return list(filter(self.check, events))
 
     def limit(self):
         return self.filter_json.get("limit", 10)
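
The one-line `filtering.py` change above is a classic Python 3 fix: built-in `filter` now returns a one-shot iterator rather than a list, so callers that expect `len()` or repeated iteration would break. A quick demonstration (the `check` predicate is illustrative, not Synapse's):

```python
def check(event):
    # Illustrative predicate standing in for Filter.check
    return event % 2 == 0

events = [1, 2, 3, 4]

lazy = filter(check, events)         # Python 3: a one-shot iterator
eager = list(filter(check, events))  # the fix: materialise the result

assert list(lazy) == [2, 4]
assert list(lazy) == []  # the bare iterator is exhausted after one pass
assert len(eager) == 2   # len() only works on the materialised list
print(eager)  # → [2, 4]
```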

+ 13 - 0
synapse/app/_base.py

@@ -124,6 +124,19 @@ def quit_with_error(error_string):
     sys.exit(1)
 
 
+def listen_metrics(bind_addresses, port):
+    """
+    Start Prometheus metrics server.
+    """
+    from synapse.metrics import RegistryProxy
+    from prometheus_client import start_http_server
+
+    for host in bind_addresses:
+        reactor.callInThread(start_http_server, int(port),
+                             addr=host, registry=RegistryProxy)
+        logger.info("Metrics now reporting on %s:%d", host, port)
+
+
 def listen_tcp(bind_addresses, port, factory, backlog=50):
     """
     Create a TCP socket for a port and several addresses

+ 7 - 0
synapse/app/appservice.py

@@ -94,6 +94,13 @@ class AppserviceServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
 

+ 9 - 2
synapse/app/client_reader.py

@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -77,7 +78,7 @@ class ClientReaderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     PublicRoomListRestServlet(self).register(resource)
@@ -118,7 +119,13 @@ class ClientReaderServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
-
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
 

+ 9 - 1
synapse/app/event_creator.py

@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -90,7 +91,7 @@ class EventCreatorServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     RoomSendEventRestServlet(self).register(resource)
@@ -134,6 +135,13 @@ class EventCreatorServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
 

+ 9 - 1
synapse/app/federation_reader.py

@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.federation.transport.server import TransportLayerServer
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.directory import DirectoryStore
@@ -71,7 +72,7 @@ class FederationReaderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "federation":
                     resources.update({
                         FEDERATION_PREFIX: TransportLayerServer(self),
@@ -107,6 +108,13 @@ class FederationReaderServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
 

+ 9 - 1
synapse/app/federation_sender.py

@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.federation import send_queue
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
 from synapse.replication.slave.storage.devices import SlavedDeviceStore
@@ -89,7 +90,7 @@ class FederationSenderServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
 
         root_resource = create_resource_tree(resources, NoResource())
 
@@ -121,6 +122,13 @@ class FederationSenderServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
             else:
                 logger.warn("Unrecognized listener type: %s", listener["type"])
 

+ 9 - 1
synapse/app/frontend_proxy.py

@@ -29,6 +29,7 @@ from synapse.http.servlet import (
     RestServlet, parse_json_object_from_request,
     RestServlet, parse_json_object_from_request,
 )
 )
 from synapse.http.site import SynapseSite
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -131,7 +132,7 @@ class FrontendProxyServer(HomeServer):
         for res in listener_config["resources"]:
         for res in listener_config["resources"]:
             for name in res["names"]:
             for name in res["names"]:
                 if name == "metrics":
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     resource = JsonResource(self, canonical_json=False)
                     KeyUploadServlet(self).register(resource)
                     KeyUploadServlet(self).register(resource)
@@ -172,6 +173,13 @@ class FrontendProxyServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

+ 13 - 4
synapse/app/homeserver.py

@@ -34,7 +34,7 @@ from synapse.module_api import ModuleApi
 from synapse.http.additional_resource import AdditionalResource
 from synapse.http.server import RootRedirect
 from synapse.http.site import SynapseSite
-from synapse.metrics import register_memory_metrics
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
     check_requirements
@@ -230,7 +230,7 @@ class SynapseHomeServer(HomeServer):
             resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)

         if name == "metrics" and self.get_config().enable_metrics:
-            resources[METRICS_PREFIX] = MetricsResource(self)
+            resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)

         if name == "replication":
             resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -263,6 +263,13 @@ class SynapseHomeServer(HomeServer):
                     reactor.addSystemEventTrigger(
                         "before", "shutdown", server_listener.stopListening,
                     )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

@@ -362,8 +369,6 @@ def setup(config_options):
         hs.get_datastore().start_doing_background_updates()
         hs.get_federation_client().start_get_pdu_cache()

-        register_memory_metrics(hs)
-
     reactor.callWhenRunning(start)

     return hs
@@ -434,6 +439,10 @@ def run(hs):
         total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
         stats["total_nonbridged_users"] = total_nonbridged_users

+        daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
+        for name, count in daily_user_type_results.iteritems():
+            stats["daily_user_type_" + name] = count
+
         room_count = yield hs.get_datastore().get_room_count()
         stats["total_room_count"] = room_count

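The homeserver.py hunk above adds per-user-type counts to the phone-home stats. A minimal standalone sketch of the key flattening it performs (the shape of the `count_daily_user_type()` result and the type names are assumed for illustration; the diff uses `iteritems()`, `items()` is the Python 3 spelling):

```python
# Illustrative result of hs.get_datastore().count_daily_user_type()
daily_user_type_results = {"native": 3, "guest": 1, "bridged": 0}

stats = {}
for name, count in daily_user_type_results.items():
    # each user type becomes its own stat key, e.g. "daily_user_type_native"
    stats["daily_user_type_" + name] = count
```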
+ 9 - 1
synapse/app/media_repository.py

@@ -27,6 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -73,7 +74,7 @@ class MediaRepositoryServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "media":
                     media_repo = self.get_media_repository_resource()
                     resources.update({
@@ -114,6 +115,13 @@ class MediaRepositoryServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

+ 9 - 1
synapse/app/pusher.py

@@ -23,6 +23,7 @@ from synapse.config._base import ConfigError
 from synapse.config.homeserver import HomeServerConfig
 from synapse.config.logger import setup_logging
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
 from synapse.replication.slave.storage.events import SlavedEventStore
@@ -92,7 +93,7 @@ class PusherServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)

         root_resource = create_resource_tree(resources, NoResource())

@@ -124,6 +125,13 @@ class PusherServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

+ 9 - 1
synapse/app/synchrotron.py

@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
 from synapse.handlers.presence import PresenceHandler, get_interested_parties
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -257,7 +258,7 @@ class SynchrotronServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     sync.register_servlets(self, resource)
@@ -301,6 +302,13 @@ class SynchrotronServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

+ 9 - 1
synapse/app/user_dir.py

@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
 from synapse.crypto import context_factory
 from synapse.http.server import JsonResource
 from synapse.http.site import SynapseSite
+from synapse.metrics import RegistryProxy
 from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
 from synapse.replication.slave.storage._base import BaseSlavedStore
 from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -105,7 +106,7 @@ class UserDirectoryServer(HomeServer):
         for res in listener_config["resources"]:
             for name in res["names"]:
                 if name == "metrics":
-                    resources[METRICS_PREFIX] = MetricsResource(self)
+                    resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
                 elif name == "client":
                     resource = JsonResource(self, canonical_json=False)
                     user_directory.register_servlets(self, resource)
@@ -146,6 +147,13 @@ class UserDirectoryServer(HomeServer):
                         globals={"hs": self},
                     )
                 )
+            elif listener["type"] == "metrics":
+                if not self.get_config().enable_metrics:
+                    logger.warn(("Metrics listener configured, but "
+                                 "collect_metrics is not enabled!"))
+                else:
+                    _base.listen_metrics(listener["bind_addresses"],
+                                         listener["port"])
            else:
                logger.warn("Unrecognized listener type: %s", listener["type"])

+ 7 - 1
synapse/config/consent_config.py

@@ -35,7 +35,8 @@ DEFAULT_CONFIG = """\
 #
 # 'server_notice_content', if enabled, will send a user a "Server Notice"
 # asking them to consent to the privacy policy. The 'server_notices' section
-# must also be configured for this to work.
+# must also be configured for this to work. Notices will *not* be sent to
+# guest users unless 'send_server_notice_to_guests' is set to true.
 #
 # 'block_events_error', if set, will block any attempts to send events
 # until the user consents to the privacy policy. The value of the setting is
@@ -49,6 +50,7 @@ DEFAULT_CONFIG = """\
 #     body: >-
 #       To continue using this homeserver you must review and agree to the
 #       terms and conditions at %(consent_uri)s
+#   send_server_notice_to_guests: True
 #   block_events_error: >-
 #     To continue using this homeserver you must review and agree to the
 #     terms and conditions at %(consent_uri)s
@@ -63,6 +65,7 @@ class ConsentConfig(Config):
         self.user_consent_version = None
         self.user_consent_template_dir = None
         self.user_consent_server_notice_content = None
+        self.user_consent_server_notice_to_guests = False
         self.block_events_without_consent_error = None

     def read_config(self, config):
@@ -77,6 +80,9 @@ class ConsentConfig(Config):
         self.block_events_without_consent_error = consent_config.get(
             "block_events_error",
         )
+        self.user_consent_server_notice_to_guests = bool(consent_config.get(
+            "send_server_notice_to_guests", False,
+        ))

     def default_config(self, **kwargs):
         return DEFAULT_CONFIG

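The consent_config.py hunk reads the new guest-notice flag with a `False` default and coerces it through `bool()`. A standalone sketch of that parsing (the config dict here is illustrative, not a real homeserver config):

```python
# Illustrative 'user_consent' section as parsed from homeserver.yaml
consent_config = {
    "version": "1.0",
    "send_server_notice_to_guests": True,
}

# bool() coerces whatever YAML value was supplied; a missing key
# falls back to False, so guests are not notified by default.
send_to_guests = bool(consent_config.get("send_server_notice_to_guests", False))
```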
+ 3 - 0
synapse/config/repository.py

@@ -250,6 +250,9 @@ class ContentRepositoryConfig(Config):
         # - '192.168.0.0/16'
         # - '100.64.0.0/10'
         # - '169.254.0.0/16'
+        # - '::1/128'
+        # - 'fe80::/64'
+        # - 'fc00::/7'
         #
         # List of IP address CIDR ranges that the URL preview spider is allowed
         # to access even if they are specified in url_preview_ip_range_blacklist.

+ 10 - 0
synapse/config/server.py

@@ -14,8 +14,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import logging
+
 from ._base import Config, ConfigError

+logger = logging.getLogger(__name__)
+

 class ServerConfig(Config):

@@ -138,6 +142,12 @@ class ServerConfig(Config):

         metrics_port = config.get("metrics_port")
         if metrics_port:
+            logger.warn(
+                ("The metrics_port configuration option is deprecated in Synapse 0.31 "
+                 "in favour of a listener. Please see "
+                 "http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
+                 " on how to configure the new listener."))
+
             self.listeners.append({
                 "port": metrics_port,
                 "bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],

+ 2 - 2
synapse/event_auth.py

@@ -471,14 +471,14 @@ def _check_power_levels(event, auth_events):
     ]

     old_list = current_state.content.get("users", {})
-    for user in set(old_list.keys() + user_list.keys()):
+    for user in set(list(old_list) + list(user_list)):
         levels_to_check.append(
             (user, "users")
         )

     old_list = current_state.content.get("events", {})
     new_list = event.content.get("events", {})
-    for ev_id in set(old_list.keys() + new_list.keys()):
+    for ev_id in set(list(old_list) + list(new_list)):
         levels_to_check.append(
             (ev_id, "events")
         )

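The event_auth.py change is a Python 3 compatibility fix: `dict.keys()` returns a view object on Python 3, so concatenating two `keys()` results with `+` raises `TypeError`. Listifying each dict first works on both interpreters. A minimal sketch with illustrative power-level dicts:

```python
# Illustrative old/new power-level user maps (not real synapse state)
old_list = {"@alice:hs": 100, "@bob:hs": 50}
user_list = {"@bob:hs": 0, "@carol:hs": 50}

# On Python 3, old_list.keys() + user_list.keys() would raise TypeError;
# iterating a dict yields its keys, so list(d) is a portable spelling.
users_to_check = set(list(old_list) + list(user_list))
```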
+ 1 - 1
synapse/events/__init__.py

@@ -146,7 +146,7 @@ class EventBase(object):
         return field in self._event_dict

     def items(self):
-        return self._event_dict.items()
+        return list(self._event_dict.items())


 class FrozenEvent(EventBase):

+ 3 - 1
synapse/events/utils.py

@@ -20,6 +20,8 @@ from frozendict import frozendict

 import re

+from six import string_types
+
 # Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
 # (?<!stuff) matches if the current position in the string is not preceded
 # by a match for 'stuff'.
@@ -277,7 +279,7 @@ def serialize_event(e, time_now_ms, as_client_event=True,

     if only_event_fields:
         if (not isinstance(only_event_fields, list) or
-                not all(isinstance(f, basestring) for f in only_event_fields)):
+                not all(isinstance(f, string_types) for f in only_event_fields)):
             raise TypeError("only_event_fields must be a list of strings")
         d = only_fields(d, only_event_fields)


+ 4 - 2
synapse/events/validator.py

@@ -17,6 +17,8 @@ from synapse.types import EventID, RoomID, UserID
 from synapse.api.errors import SynapseError
 from synapse.api.constants import EventTypes, Membership

+from six import string_types


 class EventValidator(object):

@@ -49,7 +51,7 @@ class EventValidator(object):
             strings.append("state_key")

         for s in strings:
-            if not isinstance(getattr(event, s), basestring):
+            if not isinstance(getattr(event, s), string_types):
                 raise SynapseError(400, "Not '%s' a string type" % (s,))

         if event.type == EventTypes.Member:
@@ -88,5 +90,5 @@ class EventValidator(object):
         for s in keys:
             if s not in d:
                 raise SynapseError(400, "'%s' not in content" % (s,))
-            if not isinstance(d[s], basestring):
+            if not isinstance(d[s], string_types):
                 raise SynapseError(400, "Not '%s' a string type" % (s,))

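The validator hunks swap the Python-2-only `basestring` for `six.string_types`, which is `(str, unicode)` on Python 2 and `(str,)` on Python 3, so one `isinstance()` call covers both interpreters. A small standalone sketch (the helper and content dict below are illustrative, not synapse's actual validator):

```python
from six import string_types

def check_strings(content, keys):
    # Reject missing or non-string fields, mirroring the validation
    # pattern in the diff above.
    for s in keys:
        if s not in content:
            raise ValueError("'%s' not in content" % (s,))
        if not isinstance(content[s], string_types):
            raise ValueError("Not '%s' a string type" % (s,))

check_strings({"membership": "join"}, ["membership"])  # passes silently
```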
+ 9 - 12
synapse/federation/federation_client.py

@@ -32,20 +32,17 @@ from synapse.federation.federation_base import (
     FederationBase,
     event_from_pdu_json,
 )
-import synapse.metrics
 from synapse.util import logcontext, unwrapFirstError
 from synapse.util.caches.expiringcache import ExpiringCache
 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 from synapse.util.logutils import log_function
 from synapse.util.retryutils import NotRetryingDestination

-logger = logging.getLogger(__name__)
-
+from prometheus_client import Counter

-# synapse.federation.federation_client is a silly name
-metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
+logger = logging.getLogger(__name__)

-sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
+sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])


 PDU_RETRY_TIME_MS = 1 * 60 * 1000
@@ -108,7 +105,7 @@ class FederationClient(FederationBase):
             a Deferred which will eventually yield a JSON object from the
             response
         """
-        sent_queries_counter.inc(query_type)
+        sent_queries_counter.labels(query_type).inc()

         return self.transport_layer.make_query(
             destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
@@ -127,7 +124,7 @@ class FederationClient(FederationBase):
             a Deferred which will eventually yield a JSON object from the
             response
         """
-        sent_queries_counter.inc("client_device_keys")
+        sent_queries_counter.labels("client_device_keys").inc()
         return self.transport_layer.query_client_keys(
             destination, content, timeout
         )
@@ -137,7 +134,7 @@ class FederationClient(FederationBase):
         """Query the device keys for a list of user ids hosted on a remote
         """Query the device keys for a list of user ids hosted on a remote
         server.
         server.
         """
         """
-        sent_queries_counter.inc("user_devices")
+        sent_queries_counter.labels("user_devices").inc()
         return self.transport_layer.query_user_devices(
             destination, user_id, timeout
         )
@@ -154,7 +151,7 @@ class FederationClient(FederationBase):
             a Deferred which will eventually yield a JSON object from the
             response
         """
-        sent_queries_counter.inc("client_one_time_keys")
+        sent_queries_counter.labels("client_one_time_keys").inc()
         return self.transport_layer.claim_client_keys(
             destination, content, timeout
         )
@@ -394,7 +391,7 @@ class FederationClient(FederationBase):
         """
         """
         if return_local:
         if return_local:
             seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
             seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
-            signed_events = seen_events.values()
+            signed_events = list(seen_events.values())
         else:
             seen_events = yield self.store.have_seen_events(event_ids)
             signed_events = []
@@ -592,7 +589,7 @@ class FederationClient(FederationBase):
                 }

                 valid_pdus = yield self._check_sigs_and_hash_and_fetch(
-                    destination, pdus.values(),
+                    destination, list(pdus.values()),
                     outlier=True,
                 )


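The federation_client.py hunks show the core API shift of this release: the old synapse metrics layer took the label value as an argument to `inc()`, while the Prometheus client takes it via `.labels()`. The stand-in below (not the real `prometheus_client`, which the diff actually imports) mimics only that call shape to show the migration:

```python
from collections import defaultdict

class FakeCounter(object):
    """Toy labelled counter mimicking prometheus_client's call shape:
    counter.labels(value).inc() rather than the old counter.inc(value)."""

    def __init__(self, name, documentation, labelnames=()):
        self.name = name
        self.values = defaultdict(float)

    def labels(self, *labelvalues):
        parent = self

        class _Child(object):
            def inc(self, amount=1):
                parent.values[labelvalues] += amount

        return _Child()

# Mirrors: Counter("synapse_federation_client_sent_queries", "", ["type"])
sent_queries_counter = FakeCounter(
    "synapse_federation_client_sent_queries", "", ["type"])
sent_queries_counter.labels("client_device_keys").inc()
sent_queries_counter.labels("client_device_keys").inc()
sent_queries_counter.labels("user_devices").inc()
```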
+ 10 - 9
synapse/federation/federation_server.py

@@ -27,12 +27,13 @@ from synapse.federation.federation_base import (

 from synapse.federation.persistence import TransactionActions
 from synapse.federation.units import Edu, Transaction
-import synapse.metrics
 from synapse.types import get_domain_from_id
 from synapse.util import async
 from synapse.util.caches.response_cache import ResponseCache
 from synapse.util.logutils import log_function

+from prometheus_client import Counter
+
 from six import iteritems

 # when processing incoming transactions, we try to handle multiple rooms in
@@ -41,17 +42,17 @@ TRANSACTION_CONCURRENCY_LIMIT = 10

 logger = logging.getLogger(__name__)

-# synapse.federation.federation_server is a silly name
-metrics = synapse.metrics.get_metrics_for("synapse.federation.server")
-
-received_pdus_counter = metrics.register_counter("received_pdus")
+received_pdus_counter = Counter("synapse_federation_server_received_pdus", "")

-received_edus_counter = metrics.register_counter("received_edus")
+received_edus_counter = Counter("synapse_federation_server_received_edus", "")

-received_queries_counter = metrics.register_counter("received_queries", labels=["type"])
+received_queries_counter = Counter(
+    "synapse_federation_server_received_queries", "", ["type"]
+)


 class FederationServer(FederationBase):
+
     def __init__(self, hs):
         super(FederationServer, self).__init__(hs)

@@ -131,7 +132,7 @@ class FederationServer(FederationBase):

         logger.debug("[%s] Transaction is new", transaction.transaction_id)

-        received_pdus_counter.inc_by(len(transaction.pdus))
+        received_pdus_counter.inc(len(transaction.pdus))

         pdus_by_room = {}

@@ -292,7 +293,7 @@ class FederationServer(FederationBase):

     @defer.inlineCallbacks
     def on_query_request(self, query_type, args):
-        received_queries_counter.inc(query_type)
+        received_queries_counter.labels(query_type).inc()
         resp = yield self.registry.on_query(query_type, args)
         defer.returnValue((200, resp))


+ 4 - 9
synapse/federation/send_queue.py

@@ -33,7 +33,7 @@ from .units import Edu

 from synapse.storage.presence import UserPresenceState
 from synapse.util.metrics import Measure
-import synapse.metrics
+from synapse.metrics import LaterGauge

 from blist import sorteddict
 from collections import namedtuple
@@ -45,9 +45,6 @@ from six import itervalues, iteritems
 logger = logging.getLogger(__name__)


-metrics = synapse.metrics.get_metrics_for(__name__)
-
-
 class FederationRemoteSendQueue(object):
     """A drop in replacement for TransactionQueue"""

@@ -77,10 +74,8 @@ class FederationRemoteSendQueue(object):
         # lambda binds to the queue rather than to the name of the queue which
         # lambda binds to the queue rather than to the name of the queue which
         # changes. ARGH.
         # changes. ARGH.
         def register(name, queue):
         def register(name, queue):
-            metrics.register_callback(
-                queue_name + "_size",
-                lambda: len(queue),
-            )
+            LaterGauge("synapse_federation_send_queue_%s_size" % (queue_name,),
+                       "", [], lambda: len(queue))
 
 
         for queue_name in [
         for queue_name in [
             "presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
             "presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
@@ -202,7 +197,7 @@ class FederationRemoteSendQueue(object):
 
 
         # We only want to send presence for our own users, so lets always just
         # We only want to send presence for our own users, so lets always just
         # filter here just in case.
         # filter here just in case.
-        local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
+        local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
 
 
         self.presence_map.update({state.user_id: state for state in local_states})
         self.presence_map.update({state.user_id: state for state in local_states})
         self.presence_changed[pos] = [state.user_id for state in local_states]
         self.presence_changed[pos] = [state.user_id for state in local_states]

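The `register(name, queue)` helper above exists because of Python's late-binding closures, as the "ARGH" comment notes: a lambda created directly in a loop sees the loop variable's final value, so every gauge would report the last queue's size. A minimal stdlib-only demonstration of the bug and the fix:

```python
# Two example queues standing in for presence_map, keyed_edu, etc.
queues = {"presence_map": [1, 2], "keyed_edu": [1, 2, 3]}

# Broken: each lambda closes over the *variable* queue_name, which ends
# the loop bound to "keyed_edu", so all callbacks report that queue.
broken = {}
for queue_name in queues:
    broken[queue_name] = lambda: len(queues[queue_name])

# Fixed: routing through a function binds name/queue per call, exactly
# what the diff's register(name, queue) helper achieves.
fixed = {}

def register(name, queue):
    fixed[name] = lambda: len(queue)

for queue_name, queue in queues.items():
    register(queue_name, queue)
```

The same effect can be had with a default argument (`lambda q=queue: len(q)`); the helper function is just the more explicit spelling.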
+ 34 - 29
synapse/federation/transaction_queue.py

@@ -26,23 +26,25 @@ from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
 from synapse.util.metrics import measure_func
 from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
 import synapse.metrics
+from synapse.metrics import LaterGauge
+from synapse.metrics import (
+    sent_edus_counter,
+    sent_transactions_counter,
+    events_processed_counter,
+)
+
+from prometheus_client import Counter
+
+from six import itervalues

 import logging


 logger = logging.getLogger(__name__)

-metrics = synapse.metrics.get_metrics_for(__name__)
-
-client_metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
-sent_pdus_destination_dist = client_metrics.register_distribution(
-    "sent_pdu_destinations"
+sent_pdus_destination_dist = Counter(
+    "synapse_federation_transaction_queue_sent_pdu_destinations", ""
 )
-sent_edus_counter = client_metrics.register_counter("sent_edus")
-
-sent_transactions_counter = client_metrics.register_counter("sent_transactions")
-
-events_processed_counter = client_metrics.register_counter("events_processed")


 class TransactionQueue(object):
@@ -69,8 +71,10 @@ class TransactionQueue(object):
         # done
         self.pending_transactions = {}

-        metrics.register_callback(
-            "pending_destinations",
+        LaterGauge(
+            "synapse_federation_transaction_queue_pending_destinations",
+            "",
+            [],
             lambda: len(self.pending_transactions),
         )

@@ -94,12 +98,16 @@ class TransactionQueue(object):
         # Map of destination -> (edu_type, key) -> Edu
         self.pending_edus_keyed_by_dest = edus_keyed = {}

-        metrics.register_callback(
-            "pending_pdus",
+        LaterGauge(
+            "synapse_federation_transaction_queue_pending_pdus",
+            "",
+            [],
             lambda: sum(map(len, pdus.values())),
         )
-        metrics.register_callback(
-            "pending_edus",
+        LaterGauge(
+            "synapse_federation_transaction_queue_pending_edus",
+            "",
+            [],
             lambda: (
                 sum(map(len, edus.values()))
                 + sum(map(len, presence.values()))
@@ -228,7 +236,7 @@ class TransactionQueue(object):
                 yield logcontext.make_deferred_yieldable(defer.gatherResults(
                     [
                         logcontext.run_in_background(handle_room_events, evs)
-                        for evs in events_by_room.itervalues()
+                        for evs in itervalues(events_by_room)
                     ],
                     consumeErrors=True
                 ))
@@ -241,18 +249,15 @@ class TransactionQueue(object):
                     now = self.clock.time_msec()
                     ts = yield self.store.get_received_ts(events[-1].event_id)

-                    synapse.metrics.event_processing_lag.set(
-                        now - ts, "federation_sender",
-                    )
-                    synapse.metrics.event_processing_last_ts.set(
-                        ts, "federation_sender",
-                    )
+                    synapse.metrics.event_processing_lag.labels(
+                        "federation_sender").set(now - ts)
+                    synapse.metrics.event_processing_last_ts.labels(
+                        "federation_sender").set(ts)

-                events_processed_counter.inc_by(len(events))
+                events_processed_counter.inc(len(events))

-                synapse.metrics.event_processing_positions.set(
-                    next_token, "federation_sender",
-                )
+                synapse.metrics.event_processing_positions.labels(
+                    "federation_sender").set(next_token)

         finally:
             self._is_processing = False
@@ -275,7 +280,7 @@ class TransactionQueue(object):
         if not destinations:
             return

-        sent_pdus_destination_dist.inc_by(len(destinations))
+        sent_pdus_destination_dist.inc(len(destinations))

         for destination in destinations:
             self.pending_pdus_by_dest.setdefault(destination, []).append(
@@ -322,7 +327,7 @@ class TransactionQueue(object):
                 if not states_map:
                     break

-                yield self._process_presence_inner(states_map.values())
+                yield self._process_presence_inner(list(states_map.values()))
         except Exception:
             logger.exception("Error sending presence states to servers")
         finally:

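`LaterGauge`, which replaces `metrics.register_callback` throughout the hunks above, is a synapse-internal helper. Its contract, sketched below with a toy class (assumed shape, not the real implementation), is a gauge whose value comes from a callback evaluated at collection time, so scrapes always see the current queue sizes:

```python
class LaterGauge:
    """Toy sketch: a gauge backed by a callback, read at scrape time."""

    def __init__(self, name, desc, labels, caller):
        self.name = name
        self.desc = desc
        self.labels = labels
        self.caller = caller

    def collect(self):
        # The real helper yields a GaugeMetricFamily to the prometheus
        # registry; here we just return the callback's current value.
        return self.caller()


pending_transactions = {}
gauge = LaterGauge(
    "synapse_federation_transaction_queue_pending_destinations", "", [],
    lambda: len(pending_transactions),
)
pending_transactions["remote.example.org"] = object()
```

Because the value is pulled lazily, the gauge needs no `set()` calls in the hot path, which is why the constructor alone suffices in the diff.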
+ 3 - 1
synapse/groups/groups_server.py

@@ -20,6 +20,8 @@ from synapse.api.errors import SynapseError
 from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
 from twisted.internet import defer

+from six import string_types
+
 logger = logging.getLogger(__name__)


@@ -431,7 +433,7 @@ class GroupsServerHandler(object):
                         "long_description"):
             if keyname in content:
                 value = content[keyname]
-                if not isinstance(value, basestring):
+                if not isinstance(value, string_types):
                     raise SynapseError(400, "%r value is not a string" % (keyname,))
                 profile[keyname] = value

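`basestring` does not exist on Python 3, which is why the hunk above switches the isinstance check to `six.string_types`. A stdlib-only sketch of the same validation (substituting `ValueError` for synapse's `SynapseError`):

```python
import sys

# What six.string_types resolves to on each interpreter: (str,) on
# Python 3, (basestring,) on Python 2.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821


def check_profile_field(keyname, value):
    # Mirrors the GroupsServerHandler validation in the hunk above.
    if not isinstance(value, string_types):
        raise ValueError("%r value is not a string" % (keyname,))
    return value
```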
+ 2 - 2
synapse/handlers/_base.py

@@ -114,14 +114,14 @@ class BaseHandler(object):
             if guest_access != "can_join":
                 if context:
                     current_state = yield self.store.get_events(
-                        context.current_state_ids.values()
+                        list(context.current_state_ids.values())
                     )
                 else:
                     current_state = yield self.state_handler.get_current_state(
                         event.room_id
                     )

-                current_state = current_state.values()
+                current_state = list(current_state.values())

                 logger.info("maybe_kick_guest_users %r", current_state)
                 yield self.kick_guest_users(current_state)

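The `list(...)` wrappers above, and throughout this changeset, guard against Python 3's dict views: `.values()` and `.keys()` return live views that reflect later mutation of the dict, whereas `list()` takes a stable snapshot that can be stored, indexed, or iterated repeatedly:

```python
# A miniature state map, as passed around by the handlers in this diff.
current_state_ids = {("m.room.member", "@alice:hs"): "$event1"}

live_view = current_state_ids.values()          # py3: a live view
snapshot = list(current_state_ids.values())     # stable copy

# Mutating the dict after the fact changes the view but not the snapshot.
current_state_ids[("m.room.member", "@bob:hs")] = "$event2"
```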
+ 12 - 14
synapse/handlers/appservice.py

@@ -15,20 +15,21 @@

 from twisted.internet import defer

+from six import itervalues
+
 import synapse
 from synapse.api.constants import EventTypes
 from synapse.util.metrics import Measure
 from synapse.util.logcontext import (
     make_deferred_yieldable, run_in_background,
 )
+from prometheus_client import Counter

 import logging

 logger = logging.getLogger(__name__)

-metrics = synapse.metrics.get_metrics_for(__name__)
-
-events_processed_counter = metrics.register_counter("events_processed")
+events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")


 def log_failure(failure):
@@ -120,7 +121,7 @@ class ApplicationServicesHandler(object):

                     yield make_deferred_yieldable(defer.gatherResults([
                         run_in_background(handle_room_events, evs)
-                        for evs in events_by_room.itervalues()
+                        for evs in itervalues(events_by_room)
                     ], consumeErrors=True))

                     yield self.store.set_appservice_last_pos(upper_bound)
@@ -128,18 +129,15 @@ class ApplicationServicesHandler(object):
                     now = self.clock.time_msec()
                     ts = yield self.store.get_received_ts(events[-1].event_id)

-                    synapse.metrics.event_processing_positions.set(
-                        upper_bound, "appservice_sender",
-                    )
+                    synapse.metrics.event_processing_positions.labels(
+                        "appservice_sender").set(upper_bound)

-                    events_processed_counter.inc_by(len(events))
+                    events_processed_counter.inc(len(events))

-                    synapse.metrics.event_processing_lag.set(
-                        now - ts, "appservice_sender",
-                    )
-                    synapse.metrics.event_processing_last_ts.set(
-                        ts, "appservice_sender",
-                    )
+                    synapse.metrics.event_processing_lag.labels(
+                        "appservice_sender").set(now - ts)
+                    synapse.metrics.event_processing_last_ts.labels(
+                        "appservice_sender").set(ts)
             finally:
                 self.is_processing = False

+ 3 - 3
synapse/handlers/auth.py

@@ -249,7 +249,7 @@ class AuthHandler(BaseHandler):
                 errordict = e.error_dict()

         for f in flows:
-            if len(set(f) - set(creds.keys())) == 0:
+            if len(set(f) - set(creds)) == 0:
                 # it's very useful to know what args are stored, but this can
                 # include the password in the case of registering, so only log
                 # the keys (confusingly, clientdict may contain a password
@@ -257,12 +257,12 @@ class AuthHandler(BaseHandler):
                 # and is not sensitive).
                 logger.info(
                     "Auth completed with creds: %r. Client dict has keys: %r",
-                    creds, clientdict.keys()
+                    creds, list(clientdict)
                 )
                 defer.returnValue((creds, clientdict, session['id']))

         ret = self._auth_dict_for_flows(flows, session)
-        ret['completed'] = creds.keys()
+        ret['completed'] = list(creds)
         ret.update(errordict)
         raise InteractiveAuthIncompleteError(
             ret,

+ 4 - 0
synapse/handlers/deactivate_account.py

@@ -30,6 +30,7 @@ class DeactivateAccountHandler(BaseHandler):
         self._auth_handler = hs.get_auth_handler()
         self._device_handler = hs.get_device_handler()
         self._room_member_handler = hs.get_room_member_handler()
+        self.user_directory_handler = hs.get_user_directory_handler()

         # Flag that indicates whether the process to part users from rooms is running
         self._user_parter_running = False
@@ -65,6 +66,9 @@ class DeactivateAccountHandler(BaseHandler):
         # removal from all the rooms they're a member of)
         yield self.store.add_user_pending_deactivation(user_id)

+        # delete from user directory
+        yield self.user_directory_handler.handle_user_deactivated(user_id)
+
         # Now start the process that goes through that list and
         # parts users from rooms (if it isn't already running)
         self._start_user_parting()

+ 10 - 8
synapse/handlers/device.py

@@ -26,6 +26,8 @@ from ._base import BaseHandler

 import logging

+from six import itervalues, iteritems
+
 logger = logging.getLogger(__name__)


@@ -112,7 +114,7 @@ class DeviceHandler(BaseHandler):
             user_id, device_id=None
         )

-        devices = device_map.values()
+        devices = list(device_map.values())
        for device in devices:
             _update_device_from_client_ips(device, ips)

@@ -185,7 +187,7 @@ class DeviceHandler(BaseHandler):
             defer.Deferred:
         """
         device_map = yield self.store.get_devices_by_user(user_id)
-        device_ids = device_map.keys()
+        device_ids = list(device_map)
         if except_device_id is not None:
             device_ids = [d for d in device_ids if d != except_device_id]
         yield self.delete_devices(user_id, device_ids)
@@ -318,7 +320,7 @@ class DeviceHandler(BaseHandler):
             # The user may have left the room
             # TODO: Check if they actually did or if we were just invited.
             if room_id not in room_ids:
-                for key, event_id in current_state_ids.iteritems():
+                for key, event_id in iteritems(current_state_ids):
                     etype, state_key = key
                     if etype != EventTypes.Member:
                         continue
@@ -338,7 +340,7 @@ class DeviceHandler(BaseHandler):
             # special-case for an empty prev state: include all members
             # in the changed list
             if not event_ids:
-                for key, event_id in current_state_ids.iteritems():
+                for key, event_id in iteritems(current_state_ids):
                     etype, state_key = key
                     if etype != EventTypes.Member:
                         continue
@@ -354,10 +356,10 @@ class DeviceHandler(BaseHandler):

             # Check if we've joined the room? If so we just blindly add all the users to
             # the "possibly changed" users.
-            for state_dict in prev_state_ids.itervalues():
+            for state_dict in itervalues(prev_state_ids):
                 member_event = state_dict.get((EventTypes.Member, user_id), None)
                 if not member_event or member_event != current_member_id:
-                    for key, event_id in current_state_ids.iteritems():
+                    for key, event_id in iteritems(current_state_ids):
                         etype, state_key = key
                         if etype != EventTypes.Member:
                             continue
@@ -367,14 +369,14 @@ class DeviceHandler(BaseHandler):
             # If there has been any change in membership, include them in the
             # possibly changed list. We'll check if they are joined below,
             # and we're not toooo worried about spuriously adding users.
-            for key, event_id in current_state_ids.iteritems():
+            for key, event_id in iteritems(current_state_ids):
                 etype, state_key = key
                 if etype != EventTypes.Member:
                     continue

                 # check if this member has changed since any of the extremities
                 # at the stream_ordering, and add them to the list if so.
-                for state_dict in prev_state_ids.itervalues():
+                for state_dict in itervalues(prev_state_ids):
                     prev_event_id = state_dict.get(key, None)
                     if not prev_event_id or prev_event_id != event_id:
                         if state_key != user_id:

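`six.iteritems(d)` dispatches to `d.iteritems()` on Python 2 and to `d.items()` on Python 3, so callers like the loops above get an iterator on both interpreters without building a py2 intermediate list. A stdlib-only stand-in showing the dispatch (illustrative, not six's actual code):

```python
def iteritems(d):
    # py2 dicts expose .iteritems(); py3 dicts only have .items(),
    # whose view we wrap in iter() for a uniform return type.
    items = getattr(d, "iteritems", None)
    return items() if items is not None else iter(d.items())


current_state_ids = {("m.room.member", "@alice:hs"): "$event1"}
pairs = list(iteritems(current_state_ids))
```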
+ 7 - 6
synapse/handlers/e2e_keys.py

@@ -19,6 +19,7 @@ import logging

 from canonicaljson import encode_canonical_json
 from twisted.internet import defer
+from six import iteritems

 from synapse.api.errors import (
     SynapseError, CodeMessageException, FederationDeniedError,
@@ -92,7 +93,7 @@ class E2eKeysHandler(object):
         remote_queries_not_in_cache = {}
         if remote_queries:
             query_list = []
-            for user_id, device_ids in remote_queries.iteritems():
+            for user_id, device_ids in iteritems(remote_queries):
                 if device_ids:
                     query_list.extend((user_id, device_id) for device_id in device_ids)
                 else:
@@ -103,9 +104,9 @@ class E2eKeysHandler(object):
                     query_list
                 )
             )
-            for user_id, devices in remote_results.iteritems():
+            for user_id, devices in iteritems(remote_results):
                 user_devices = results.setdefault(user_id, {})
-                for device_id, device in devices.iteritems():
+                for device_id, device in iteritems(devices):
                     keys = device.get("keys", None)
                     device_display_name = device.get("device_display_name", None)
                     if keys:
@@ -250,9 +251,9 @@ class E2eKeysHandler(object):
             "Claimed one-time-keys: %s",
             ",".join((
                 "%s for %s:%s" % (key_id, user_id, device_id)
-                for user_id, user_keys in json_result.iteritems()
-                for device_id, device_keys in user_keys.iteritems()
-                for key_id, _ in device_keys.iteritems()
+                for user_id, user_keys in iteritems(json_result)
+                for device_id, device_keys in iteritems(user_keys)
+                for key_id, _ in iteritems(device_keys)
             )),
         )

+ 37 - 24
synapse/handlers/federation.py

@@ -24,6 +24,7 @@ from signedjson.key import decode_verify_key_bytes
 from signedjson.sign import verify_signed_json
 import six
 from six.moves import http_client
+from six import iteritems
 from twisted.internet import defer
 from unpaddedbase64 import decode_base64

@@ -51,7 +52,6 @@ from synapse.util.retryutils import NotRetryingDestination

 from synapse.util.distributor import user_joined_room

-
 logger = logging.getLogger(__name__)


@@ -479,7 +479,7 @@ class FederationHandler(BaseHandler):
         # to get all state ids that we're interested in.
         event_map = yield self.store.get_events([
             e_id
-            for key_to_eid in event_to_state_ids.values()
+            for key_to_eid in list(event_to_state_ids.values())
             for key, e_id in key_to_eid.items()
             if key[0] != EventTypes.Member or check_match(key[1])
         ])
@@ -487,10 +487,10 @@ class FederationHandler(BaseHandler):
         event_to_state = {
             e_id: {
                 key: event_map[inner_e_id]
-                for key, inner_e_id in key_to_eid.items()
+                for key, inner_e_id in key_to_eid.iteritems()
                 if inner_e_id in event_map
             }
-            for e_id, key_to_eid in event_to_state_ids.items()
+            for e_id, key_to_eid in event_to_state_ids.iteritems()
         }

         def redact_disallowed(event, state):
@@ -505,7 +505,7 @@ class FederationHandler(BaseHandler):
                     # membership states for the requesting server to determine
                     # if the server is either in the room or has been invited
                     # into the room.
-                    for ev in state.values():
+                    for ev in state.itervalues():
                         if ev.type != EventTypes.Member:
                             continue
                         try:
@@ -751,9 +751,19 @@ class FederationHandler(BaseHandler):
         curr_state = yield self.state_handler.get_current_state(room_id)

         def get_domains_from_state(state):
+            """Get joined domains from state
+
+            Args:
+                state (dict[tuple, FrozenEvent]): State map from type/state
+                    key to event.
+
+            Returns:
+                list[tuple[str, int]]: Returns a list of servers with the
+                lowest depth of their joins. Sorted by lowest depth first.
+            """
             joined_users = [
                 (state_key, int(event.depth))
-                for (e_type, state_key), event in state.items()
+                for (e_type, state_key), event in state.iteritems()
                 if e_type == EventTypes.Member
                 and event.membership == Membership.JOIN
             ]
@@ -770,7 +780,7 @@ class FederationHandler(BaseHandler):
                 except Exception:
                     pass

-            return sorted(joined_domains.items(), key=lambda d: d[1])
+            return sorted(joined_domains.iteritems(), key=lambda d: d[1])

         curr_domains = get_domains_from_state(curr_state)

@@ -787,7 +797,7 @@ class FederationHandler(BaseHandler):
                     yield self.backfill(
                         dom, room_id,
                         limit=100,
-                        extremities=[e for e in extremities.keys()]
+                        extremities=extremities,
                     )
                     # If this succeeded then we probably already have the
                     # appropriate stuff.
@@ -833,7 +843,7 @@ class FederationHandler(BaseHandler):
         tried_domains = set(likely_domains)
         tried_domains.add(self.server_name)

-        event_ids = list(extremities.keys())
+        event_ids = list(extremities.iterkeys())

         logger.debug("calling resolve_state_groups in _maybe_backfill")
         resolve = logcontext.preserve_fn(
@@ -843,31 +853,34 @@ class FederationHandler(BaseHandler):
             [resolve(room_id, [e]) for e in event_ids],
             consumeErrors=True,
         ))
+
+        # dict[str, dict[tuple, str]], a map from event_id to state map of
+        # event_ids.
         states = dict(zip(event_ids, [s.state for s in states]))

         state_map = yield self.store.get_events(
-            [e_id for ids in states.values() for e_id in ids],
+            [e_id for ids in states.itervalues() for e_id in ids.itervalues()],
             get_prev_content=False
         )
         states = {
             key: {
                 k: state_map[e_id]
-                for k, e_id in state_dict.items()
+                for k, e_id in state_dict.iteritems()
                 if e_id in state_map
-            } for key, state_dict in states.items()
+            } for key, state_dict in states.iteritems()
         }

         for e_id, _ in sorted_extremeties_tuple:
             likely_domains = get_domains_from_state(states[e_id])

             success = yield try_backfill([
-                dom for dom in likely_domains
+                dom for dom, _ in likely_domains
                 if dom not in tried_domains
             ])
             if success:
                 defer.returnValue(True)

-            tried_domains.update(likely_domains)
+            tried_domains.update(dom for dom, _ in likely_domains)

         defer.returnValue(False)

@@ -1135,13 +1148,13 @@ class FederationHandler(BaseHandler):
                 user = UserID.from_string(event.state_key)
                 yield user_joined_room(self.distributor, user, event.room_id)

-        state_ids = context.prev_state_ids.values()
+        state_ids = list(context.prev_state_ids.values())
         auth_chain = yield self.store.get_auth_chain(state_ids)

-        state = yield self.store.get_events(context.prev_state_ids.values())
+        state = yield self.store.get_events(list(context.prev_state_ids.values()))
 
 
         defer.returnValue({
         defer.returnValue({
-            "state": state.values(),
+            "state": list(state.values()),
             "auth_chain": auth_chain,
             "auth_chain": auth_chain,
         })
         })
 
 
@@ -1375,7 +1388,7 @@ class FederationHandler(BaseHandler):
         )
         )
 
 
         if state_groups:
         if state_groups:
-            _, state = state_groups.items().pop()
+            _, state = list(iteritems(state_groups)).pop()
             results = {
             results = {
                 (e.type, e.state_key): e for e in state
                 (e.type, e.state_key): e for e in state
             }
             }
@@ -1391,7 +1404,7 @@ class FederationHandler(BaseHandler):
                 else:
                 else:
                     del results[(event.type, event.state_key)]
                     del results[(event.type, event.state_key)]
 
 
-            res = results.values()
+            res = list(results.values())
             for event in res:
             for event in res:
                 # We sign these again because there was a bug where we
                 # We sign these again because there was a bug where we
                 # incorrectly signed things the first time round
                 # incorrectly signed things the first time round
@@ -1432,7 +1445,7 @@ class FederationHandler(BaseHandler):
                 else:
                 else:
                     results.pop((event.type, event.state_key), None)
                     results.pop((event.type, event.state_key), None)
 
 
-            defer.returnValue(results.values())
+            defer.returnValue(list(results.values()))
         else:
         else:
             defer.returnValue([])
             defer.returnValue([])
 
 
@@ -1901,7 +1914,7 @@ class FederationHandler(BaseHandler):
                 })
                 })
 
 
                 new_state = self.state_handler.resolve_events(
                 new_state = self.state_handler.resolve_events(
-                    [local_view.values(), remote_view.values()],
+                    [list(local_view.values()), list(remote_view.values())],
                     event
                     event
                 )
                 )
 
 
@@ -2021,7 +2034,7 @@ class FederationHandler(BaseHandler):
                 this will not be included in the current_state in the context.
                 this will not be included in the current_state in the context.
         """
         """
         state_updates = {
         state_updates = {
-            k: a.event_id for k, a in auth_events.iteritems()
+            k: a.event_id for k, a in iteritems(auth_events)
             if k != event_key
             if k != event_key
         }
         }
         context.current_state_ids = dict(context.current_state_ids)
         context.current_state_ids = dict(context.current_state_ids)
@@ -2031,7 +2044,7 @@ class FederationHandler(BaseHandler):
             context.delta_ids.update(state_updates)
             context.delta_ids.update(state_updates)
         context.prev_state_ids = dict(context.prev_state_ids)
         context.prev_state_ids = dict(context.prev_state_ids)
         context.prev_state_ids.update({
         context.prev_state_ids.update({
-            k: a.event_id for k, a in auth_events.iteritems()
+            k: a.event_id for k, a in iteritems(auth_events)
         })
         })
         context.state_group = yield self.store.store_state_group(
         context.state_group = yield self.store.store_state_group(
             event.event_id,
             event.event_id,
@@ -2083,7 +2096,7 @@ class FederationHandler(BaseHandler):
 
 
         def get_next(it, opt=None):
         def get_next(it, opt=None):
             try:
             try:
-                return it.next()
+                return next(it)
             except Exception:
             except Exception:
                 return opt
                 return opt
 
 

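The `get_next` change above swaps Python 2's `it.next()` method for the `next()` builtin, which works on both Python 2 and 3. A minimal sketch (iterator contents are made up for illustration); note that `next()` also accepts a default value directly, which is essentially what the surrounding `try`/`except` emulates:

```python
it = iter(["a", "b"])

# Python 3 removed the .next() method; the next() builtin works on both versions.
assert next(it) == "a"

# next() can take a default instead of raising StopIteration when exhausted,
# which is what the get_next() helper's try/except achieves.
assert next(it, None) == "b"
assert next(it, None) is None
```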
+ 2 - 1
synapse/handlers/groups_local.py

@@ -15,6 +15,7 @@
 # limitations under the License.
 
 from twisted.internet import defer
+from six import iteritems
 
 from synapse.api.errors import SynapseError
 from synapse.types import get_domain_from_id
@@ -449,7 +450,7 @@ class GroupsLocalHandler(object):
 
         results = {}
         failed_results = []
-        for destination, dest_user_ids in destinations.iteritems():
+        for destination, dest_user_ids in iteritems(destinations):
             try:
                 r = yield self.transport_client.bulk_get_publicised_groups(
                     destination, list(dest_user_ids),

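The `iteritems(destinations)` call above comes from the `six` compatibility library: it iterates lazily on both Python 2 (`d.iteritems()`) and Python 3 (where `d.items()` is already a lazy view). A rough stdlib-only stand-in showing the idea (this is a sketch, not `six`'s actual implementation):

```python
import sys

def iteritems(d):
    """Iterate over (key, value) pairs without materializing a list on Python 2."""
    if sys.version_info[0] >= 3:
        return iter(d.items())   # items() is already a lazy view on Python 3
    return d.iteritems()         # Python 2's lazy iterator

# usage: same call shape regardless of interpreter version
pairs = dict(iteritems({"a": 1, "b": 2}))
assert pairs == {"a": 1, "b": 2}
```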
+ 8 - 4
synapse/handlers/message.py

@@ -19,6 +19,7 @@ import sys
 
 from canonicaljson import encode_canonical_json
 import six
+from six import string_types, itervalues, iteritems
 from twisted.internet import defer, reactor
 from twisted.internet.defer import succeed
 from twisted.python.failure import Failure
@@ -402,7 +403,7 @@ class MessageHandler(BaseHandler):
                 "avatar_url": profile.avatar_url,
                 "display_name": profile.display_name,
             }
-            for user_id, profile in users_with_profile.iteritems()
+            for user_id, profile in iteritems(users_with_profile)
         })
 
 
@@ -571,6 +572,9 @@ class EventCreationHandler(object):
 
         u = yield self.store.get_user_by_id(user_id)
         assert u is not None
+        if u["appservice_id"] is not None:
+            # users registered by an appservice are exempt
+            return
         if u["consent_version"] == self.config.user_consent_version:
             return
 
@@ -667,7 +671,7 @@ class EventCreationHandler(object):
 
             spam_error = self.spam_checker.check_event_for_spam(event)
             if spam_error:
-                if not isinstance(spam_error, basestring):
+                if not isinstance(spam_error, string_types):
                     spam_error = "Spam is not permitted here"
                 raise SynapseError(
                     403, spam_error, Codes.FORBIDDEN
@@ -881,7 +885,7 @@ class EventCreationHandler(object):
 
                 state_to_include_ids = [
                     e_id
-                    for k, e_id in context.current_state_ids.iteritems()
+                    for k, e_id in iteritems(context.current_state_ids)
                     if k[0] in self.hs.config.room_invite_state_types
                     or k == (EventTypes.Member, event.sender)
                 ]
@@ -895,7 +899,7 @@ class EventCreationHandler(object):
                         "content": e.content,
                         "sender": e.sender,
                     }
-                    for e in state_to_include.itervalues()
+                    for e in itervalues(state_to_include)
                 ]
 
                 invitee = UserID.from_string(event.state_key)

+ 46 - 40
synapse/handlers/presence.py

@@ -25,6 +25,8 @@ The methods that define policy are:
 from twisted.internet import defer, reactor
 from contextlib import contextmanager
 
+from six import itervalues, iteritems
+
 from synapse.api.errors import SynapseError
 from synapse.api.constants import PresenceState
 from synapse.storage.presence import UserPresenceState
@@ -36,27 +38,29 @@ from synapse.util.logutils import log_function
 from synapse.util.metrics import Measure
 from synapse.util.wheel_timer import WheelTimer
 from synapse.types import UserID, get_domain_from_id
-import synapse.metrics
+from synapse.metrics import LaterGauge
 
 import logging
 
+from prometheus_client import Counter
 
 logger = logging.getLogger(__name__)
 
-metrics = synapse.metrics.get_metrics_for(__name__)
 
-notified_presence_counter = metrics.register_counter("notified_presence")
-federation_presence_out_counter = metrics.register_counter("federation_presence_out")
-presence_updates_counter = metrics.register_counter("presence_updates")
-timers_fired_counter = metrics.register_counter("timers_fired")
-federation_presence_counter = metrics.register_counter("federation_presence")
-bump_active_time_counter = metrics.register_counter("bump_active_time")
+notified_presence_counter = Counter("synapse_handler_presence_notified_presence", "")
+federation_presence_out_counter = Counter(
+    "synapse_handler_presence_federation_presence_out", "")
+presence_updates_counter = Counter("synapse_handler_presence_presence_updates", "")
+timers_fired_counter = Counter("synapse_handler_presence_timers_fired", "")
+federation_presence_counter = Counter("synapse_handler_presence_federation_presence", "")
+bump_active_time_counter = Counter("synapse_handler_presence_bump_active_time", "")
 
-get_updates_counter = metrics.register_counter("get_updates", labels=["type"])
+get_updates_counter = Counter("synapse_handler_presence_get_updates", "", ["type"])
 
-notify_reason_counter = metrics.register_counter("notify_reason", labels=["reason"])
-state_transition_counter = metrics.register_counter(
-    "state_transition", labels=["from", "to"]
+notify_reason_counter = Counter(
+    "synapse_handler_presence_notify_reason", "", ["reason"])
+state_transition_counter = Counter(
+    "synapse_handler_presence_state_transition", "", ["from", "to"]
 )
 
 
@@ -141,8 +145,9 @@ class PresenceHandler(object):
             for state in active_presence
         }
 
-        metrics.register_callback(
-            "user_to_current_state_size", lambda: len(self.user_to_current_state)
+        LaterGauge(
+            "synapse_handlers_presence_user_to_current_state_size", "", [],
+            lambda: len(self.user_to_current_state)
         )
 
         now = self.clock.time_msec()
@@ -212,7 +217,8 @@ class PresenceHandler(object):
             60 * 1000,
         )
 
-        metrics.register_callback("wheel_timer_size", lambda: len(self.wheel_timer))
+        LaterGauge("synapse_handlers_presence_wheel_timer_size", "", [],
+                   lambda: len(self.wheel_timer))
 
     @defer.inlineCallbacks
     def _on_shutdown(self):
@@ -315,11 +321,11 @@ class PresenceHandler(object):
 
             # TODO: We should probably ensure there are no races hereafter
 
-            presence_updates_counter.inc_by(len(new_states))
+            presence_updates_counter.inc(len(new_states))
 
             if to_notify:
-                notified_presence_counter.inc_by(len(to_notify))
-                yield self._persist_and_notify(to_notify.values())
+                notified_presence_counter.inc(len(to_notify))
+                yield self._persist_and_notify(list(to_notify.values()))
 
             self.unpersisted_users_changes |= set(s.user_id for s in new_states)
             self.unpersisted_users_changes -= set(to_notify.keys())
@@ -329,7 +335,7 @@ class PresenceHandler(object):
                 if user_id not in to_notify
             }
             if to_federation_ping:
-                federation_presence_out_counter.inc_by(len(to_federation_ping))
+                federation_presence_out_counter.inc(len(to_federation_ping))
 
                 self._push_to_remotes(to_federation_ping.values())
 
@@ -367,7 +373,7 @@ class PresenceHandler(object):
                     for user_id in users_to_check
                 ]
 
-                timers_fired_counter.inc_by(len(states))
+                timers_fired_counter.inc(len(states))
 
                 changes = handle_timeouts(
                     states,
@@ -530,7 +536,7 @@ class PresenceHandler(object):
                 prev_state.copy_and_replace(
                     last_user_sync_ts=time_now_ms,
                 )
-                for prev_state in prev_states.itervalues()
+                for prev_state in itervalues(prev_states)
             ])
             self.external_process_last_updated_ms.pop(process_id, None)
 
@@ -553,14 +559,14 @@ class PresenceHandler(object):
             for user_id in user_ids
         }
 
-        missing = [user_id for user_id, state in states.iteritems() if not state]
+        missing = [user_id for user_id, state in iteritems(states) if not state]
         if missing:
             # There are things not in our in memory cache. Lets pull them out of
             # the database.
             res = yield self.store.get_presence_for_users(missing)
             states.update(res)
 
-            missing = [user_id for user_id, state in states.iteritems() if not state]
+            missing = [user_id for user_id, state in iteritems(states) if not state]
             if missing:
                 new = {
                     user_id: UserPresenceState.default(user_id)
@@ -656,7 +662,7 @@ class PresenceHandler(object):
             updates.append(prev_state.copy_and_replace(**new_fields))
 
         if updates:
-            federation_presence_counter.inc_by(len(updates))
+            federation_presence_counter.inc(len(updates))
             yield self._update_states(updates)
 
     @defer.inlineCallbacks
@@ -681,7 +687,7 @@ class PresenceHandler(object):
         """
 
         updates = yield self.current_state_for_users(target_user_ids)
-        updates = updates.values()
+        updates = list(updates.values())
 
         for user_id in set(target_user_ids) - set(u.user_id for u in updates):
             updates.append(UserPresenceState.default(user_id))
@@ -747,11 +753,11 @@ class PresenceHandler(object):
             self._push_to_remotes([state])
         else:
             user_ids = yield self.store.get_users_in_room(room_id)
-            user_ids = filter(self.is_mine_id, user_ids)
+            user_ids = list(filter(self.is_mine_id, user_ids))
 
             states = yield self.current_state_for_users(user_ids)
 
-            self._push_to_remotes(states.values())
+            self._push_to_remotes(list(states.values()))
 
     @defer.inlineCallbacks
     def get_presence_list(self, observer_user, accepted=None):
@@ -931,28 +937,28 @@ def should_notify(old_state, new_state):
         return False
 
     if old_state.status_msg != new_state.status_msg:
-        notify_reason_counter.inc("status_msg_change")
+        notify_reason_counter.labels("status_msg_change").inc()
         return True
 
     if old_state.state != new_state.state:
-        notify_reason_counter.inc("state_change")
-        state_transition_counter.inc(old_state.state, new_state.state)
+        notify_reason_counter.labels("state_change").inc()
+        state_transition_counter.labels(old_state.state, new_state.state).inc()
        return True
 
     if old_state.state == PresenceState.ONLINE:
         if new_state.currently_active != old_state.currently_active:
-            notify_reason_counter.inc("current_active_change")
+            notify_reason_counter.labels("current_active_change").inc()
             return True
 
         if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
             # Only notify about last active bumps if we're not currently acive
             if not new_state.currently_active:
-                notify_reason_counter.inc("last_active_change_online")
+                notify_reason_counter.labels("last_active_change_online").inc()
                 return True
 
     elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
         # Always notify for a transition where last active gets bumped.
-        notify_reason_counter.inc("last_active_change_not_online")
+        notify_reason_counter.labels("last_active_change_not_online").inc()
         return True
 
     return False
@@ -1026,14 +1032,14 @@ class PresenceEventSource(object):
             if changed is not None and len(changed) < 500:
                 # For small deltas, its quicker to get all changes and then
                 # work out if we share a room or they're in our presence list
-                get_updates_counter.inc("stream")
+                get_updates_counter.labels("stream").inc()
                 for other_user_id in changed:
                     if other_user_id in users_interested_in:
                         user_ids_changed.add(other_user_id)
             else:
                 # Too many possible updates. Find all users we can see and check
                 # if any of them have changed.
-                get_updates_counter.inc("full")
+                get_updates_counter.labels("full").inc()
 
                 if from_key:
                     user_ids_changed = stream_change_cache.get_entities_changed(
@@ -1045,10 +1051,10 @@ class PresenceEventSource(object):
             updates = yield presence.current_state_for_users(user_ids_changed)
 
         if include_offline:
-            defer.returnValue((updates.values(), max_token))
+            defer.returnValue((list(updates.values()), max_token))
         else:
             defer.returnValue(([
-                s for s in updates.itervalues()
+                s for s in itervalues(updates)
                 if s.state != PresenceState.OFFLINE
             ], max_token))
 
@@ -1106,7 +1112,7 @@ def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
         if new_state:
             changes[state.user_id] = new_state
 
-    return changes.values()
+    return list(changes.values())
 
 
 def handle_timeout(state, is_mine, syncing_user_ids, now):
@@ -1305,11 +1311,11 @@ def get_interested_remotes(store, states, state_handler):
     # hosts in those rooms.
     room_ids_to_states, users_to_states = yield get_interested_parties(store, states)
 
-    for room_id, states in room_ids_to_states.iteritems():
+    for room_id, states in iteritems(room_ids_to_states):
         hosts = yield state_handler.get_current_hosts_in_room(room_id)
         hosts_and_states.append((hosts, states))
 
-    for user_id, states in users_to_states.iteritems():
+    for user_id, states in iteritems(users_to_states):
         host = get_domain_from_id(user_id)
         hosts_and_states.append(([host], states))
 
 

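The presence.py hunks above migrate from `counter.inc("reason")` (the old synapse.metrics API) to `counter.labels("reason").inc()`, the call shape used by `prometheus_client`. A toy stand-in class, stdlib-only and purely illustrative (not the real `prometheus_client.Counter`), showing that shape:

```python
from collections import defaultdict

class LabelledCounter:
    """Toy sketch of prometheus_client.Counter's labelled-metric API:
    old style was counter.inc("reason"); new style is counter.labels("reason").inc()."""

    def __init__(self, name, documentation, labelnames=()):
        self.name = name
        self.labelnames = tuple(labelnames)
        self.values = defaultdict(float)  # label tuple -> running total

    def labels(self, *labelvalues):
        # one label value per declared label name, as in prometheus_client
        assert len(labelvalues) == len(self.labelnames)
        counter = self

        class _Child:
            def inc(self, amount=1):
                counter.values[labelvalues] += amount

        return _Child()

notify_reason_counter = LabelledCounter(
    "synapse_handler_presence_notify_reason", "", ["reason"])
notify_reason_counter.labels("state_change").inc()
notify_reason_counter.labels("state_change").inc()
assert notify_reason_counter.values[("state_change",)] == 2
```

The same shift explains `inc_by(n)` becoming `inc(n)`: the prometheus client's `inc()` takes an optional amount rather than a separate method.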
+ 1 - 1
synapse/handlers/room.py

@@ -455,7 +455,7 @@ class RoomContextHandler(BaseHandler):
         state = yield self.store.get_state_for_events(
             [last_event_id], None
         )
-        results["state"] = state[last_event_id].values()
+        results["state"] = list(state[last_event_id].values())
 
         results["start"] = now_token.copy_and_replace(
             "room_key", results["start"]

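The many `list(...)` wrappers throughout this diff exist because Python 3's `dict.values()` and `dict.items()` return live views rather than lists: they track later mutations, lack list methods such as `.pop()` (hence `list(iteritems(state_groups)).pop()` above), and so are unsafe to hand to callers expecting a stable list. A quick illustration:

```python
d = {"a": 1}
view = d.values()            # lazy view in Python 3, not a list
snapshot = list(d.values())  # stable copy, as the diff's list(...) wrappers produce

d["b"] = 2
assert len(view) == 2      # the view reflects the later mutation
assert len(snapshot) == 1  # the list() snapshot does not

# views also lack list methods such as .pop()
assert not hasattr(view, "pop")
```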
+ 2 - 1
synapse/handlers/room_list.py

@@ -15,6 +15,7 @@
 
 from twisted.internet import defer
 
+from six import iteritems
 from six.moves import range
 
 from ._base import BaseHandler
@@ -307,7 +308,7 @@ class RoomListHandler(BaseHandler):
         )
 
         event_map = yield self.store.get_events([
-            event_id for key, event_id in current_state_ids.iteritems()
+            event_id for key, event_id in iteritems(current_state_ids)
             if key[0] in (
                 EventTypes.JoinRules,
                 EventTypes.Name,

+ 14 - 10
synapse/handlers/room_member.py

@@ -298,16 +298,6 @@ class RoomMemberHandler(object):
             is_blocked = yield self.store.is_room_blocked(room_id)
             if is_blocked:
                 raise SynapseError(403, "This room has been blocked on this server")
-        else:
-            # we don't allow people to reject invites to, or leave, the
-            # server notice room.
-            is_blocked = yield self._is_server_notice_room(room_id)
-            if is_blocked:
-                raise SynapseError(
-                    http_client.FORBIDDEN,
-                    "You cannot leave this room",
-                    errcode=Codes.CANNOT_LEAVE_SERVER_NOTICE_ROOM,
-                )
 
         if effective_membership_state == Membership.INVITE:
             # block any attempts to invite the server notices mxid
@@ -383,6 +373,20 @@ class RoomMemberHandler(object):
                 if same_sender and same_membership and same_content:
                     defer.returnValue(old_state)
 
+            # we don't allow people to reject invites to the server notice
+            # room, but they can leave it once they are joined.
+            if (
+                old_membership == Membership.INVITE and
+                effective_membership_state == Membership.LEAVE
+            ):
+                is_blocked = yield self._is_server_notice_room(room_id)
+                if is_blocked:
+                    raise SynapseError(
+                        http_client.FORBIDDEN,
+                        "You cannot reject this invite",
+                        errcode=Codes.CANNOT_LEAVE_SERVER_NOTICE_ROOM,
+                    )
+
         is_host_in_room = yield self._is_host_in_room(current_state_ids)
 
         if effective_membership_state == Membership.JOIN:

+ 1 - 1
synapse/handlers/search.py

@@ -348,7 +348,7 @@ class SearchHandler(BaseHandler):
             rooms = set(e.room_id for e in allowed_events)
             for room_id in rooms:
                 state = yield self.state_handler.get_current_state(room_id)
-                state_results[room_id] = state.values()
+                state_results[room_id] = list(state.values())
 
             state_results.values()
 
 

+ 22 - 10
synapse/handlers/sync.py

@@ -28,6 +28,8 @@ import collections
 import logging
 import itertools
 
+from six import itervalues, iteritems
+
 logger = logging.getLogger(__name__)
 
 
@@ -275,7 +277,7 @@ class SyncHandler(object):
                 # result returned by the event source is poor form (it might cache
                 # the object)
                 room_id = event["room_id"]
-                event_copy = {k: v for (k, v) in event.iteritems()
+                event_copy = {k: v for (k, v) in iteritems(event)
                               if k != "room_id"}
                 ephemeral_by_room.setdefault(room_id, []).append(event_copy)
 
@@ -294,7 +296,7 @@ class SyncHandler(object):
             for event in receipts:
                 room_id = event["room_id"]
                 # exclude room id, as above
-                event_copy = {k: v for (k, v) in event.iteritems()
+                event_copy = {k: v for (k, v) in iteritems(event)
                               if k != "room_id"}
                 ephemeral_by_room.setdefault(room_id, []).append(event_copy)
 
@@ -325,7 +327,7 @@ class SyncHandler(object):
                 current_state_ids = frozenset()
                 if any(e.is_state() for e in recents):
                     current_state_ids = yield self.state.get_current_state_ids(room_id)
-                    current_state_ids = frozenset(current_state_ids.itervalues())
+                    current_state_ids = frozenset(itervalues(current_state_ids))
 
                 recents = yield filter_events_for_client(
                     self.store,
@@ -382,7 +384,7 @@ class SyncHandler(object):
                 current_state_ids = frozenset()
                 if any(e.is_state() for e in loaded_recents):
                     current_state_ids = yield self.state.get_current_state_ids(room_id)
-                    current_state_ids = frozenset(current_state_ids.itervalues())
+                    current_state_ids = frozenset(itervalues(current_state_ids))
 
                 loaded_recents = yield filter_events_for_client(
                     self.store,
@@ -441,6 +443,10 @@ class SyncHandler(object):
         Returns:
         Returns:
             A Deferred map from ((type, state_key)->Event)
             A Deferred map from ((type, state_key)->Event)
         """
         """
+        # FIXME this claims to get the state at a stream position, but
+        # get_recent_events_for_room operates by topo ordering. This therefore
+        # does not reliably give you the state at the given stream position.
+        # (https://github.com/matrix-org/synapse/issues/3305)
         last_events, _ = yield self.store.get_recent_events_for_room(
         last_events, _ = yield self.store.get_recent_events_for_room(
             room_id, end_token=stream_position.room_key, limit=1,
             room_id, end_token=stream_position.room_key, limit=1,
         )
         )
@@ -535,11 +541,11 @@ class SyncHandler(object):
 
 
         state = {}
         state = {}
         if state_ids:
         if state_ids:
-            state = yield self.store.get_events(state_ids.values())
+            state = yield self.store.get_events(list(state_ids.values()))
 
 
         defer.returnValue({
         defer.returnValue({
             (e.type, e.state_key): e
             (e.type, e.state_key): e
-            for e in sync_config.filter_collection.filter_room_state(state.values())
+            for e in sync_config.filter_collection.filter_room_state(list(state.values()))
         })
         })
 
 
     @defer.inlineCallbacks
     @defer.inlineCallbacks
@@ -888,7 +894,7 @@ class SyncHandler(object):
             presence.extend(states)
             presence.extend(states)
 
 
             # Deduplicate the presence entries so that there's at most one per user
             # Deduplicate the presence entries so that there's at most one per user
-            presence = {p.user_id: p for p in presence}.values()
+            presence = list({p.user_id: p for p in presence}.values())
 
 
         presence = sync_config.filter_collection.filter_presence(
         presence = sync_config.filter_collection.filter_presence(
             presence
             presence
@@ -984,7 +990,7 @@ class SyncHandler(object):
         if since_token:
         if since_token:
             for joined_sync in sync_result_builder.joined:
             for joined_sync in sync_result_builder.joined:
                 it = itertools.chain(
                 it = itertools.chain(
-                    joined_sync.timeline.events, joined_sync.state.itervalues()
+                    joined_sync.timeline.events, itervalues(joined_sync.state)
                 )
                 )
                 for event in it:
                 for event in it:
                     if event.type == EventTypes.Member:
                     if event.type == EventTypes.Member:
@@ -1040,7 +1046,13 @@ class SyncHandler(object):
 
 
         Returns:
         Returns:
             Deferred(tuple): Returns a tuple of the form:
             Deferred(tuple): Returns a tuple of the form:
-            `([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)`
+            `(room_entries, invited_rooms, newly_joined_rooms, newly_left_rooms)`
+
+            where:
+                room_entries is a list [RoomSyncResultBuilder]
+                invited_rooms is a list [InvitedSyncResult]
+                newly_joined rooms is a list[str] of room ids
+                newly_left_rooms is a list[str] of room ids
         """
         """
         user_id = sync_result_builder.sync_config.user.to_string()
         user_id = sync_result_builder.sync_config.user.to_string()
         since_token = sync_result_builder.since_token
         since_token = sync_result_builder.since_token
@@ -1062,7 +1074,7 @@ class SyncHandler(object):
         newly_left_rooms = []
         newly_left_rooms = []
         room_entries = []
         room_entries = []
         invited = []
         invited = []
-        for room_id, events in mem_change_events_by_room_id.iteritems():
+        for room_id, events in iteritems(mem_change_events_by_room_id):
             non_joins = [e for e in events if e.membership != Membership.JOIN]
             non_joins = [e for e in events if e.membership != Membership.JOIN]
             has_join = len(non_joins) != len(events)
             has_join = len(non_joins) != len(events)
 
 

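The `iteritems`/`itervalues` swaps above use six so the same code runs on Python 2 and 3. A rough sketch of what those helpers do (not six's actual implementation, just the fallback idea):

```python
def iteritems(d):
    # Py2 dicts have .iteritems(); on Py3 fall back to .items(),
    # which is already an iterable view.
    getter = getattr(d, "iteritems", d.items)
    return iter(getter())


def itervalues(d):
    getter = getattr(d, "itervalues", d.values)
    return iter(getter())


event = {"room_id": "!r:hs", "type": "m.receipt"}
event_copy = {k: v for (k, v) in iteritems(event) if k != "room_id"}
# event_copy == {"type": "m.receipt"}
```

On Python 3 alone this is redundant (plain `.items()` would do), but during the migration it let the same source tree run under both interpreters without building intermediate lists on Py2.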
+ 9 - 1
synapse/handlers/user_directory.py

@@ -22,6 +22,7 @@ from synapse.util.metrics import Measure
 from synapse.util.async import sleep
 from synapse.types import get_localpart_from_id
 
+from six import iteritems
 
 logger = logging.getLogger(__name__)
 
@@ -122,6 +123,13 @@ class UserDirectoryHandler(object):
             user_id, profile.display_name, profile.avatar_url, None,
         )
 
+    @defer.inlineCallbacks
+    def handle_user_deactivated(self, user_id):
+        """Called when a user ID is deactivated
+        """
+        yield self.store.remove_from_user_dir(user_id)
+        yield self.store.remove_from_user_in_public_room(user_id)
+
     @defer.inlineCallbacks
     def _unsafe_process(self):
         # If self.pos is None then means we haven't fetched it from DB
@@ -403,7 +411,7 @@ class UserDirectoryHandler(object):
 
         if change:
             users_with_profile = yield self.state.get_current_user_in_room(room_id)
-            for user_id, profile in users_with_profile.iteritems():
+            for user_id, profile in iteritems(users_with_profile):
                 yield self._handle_new_user(room_id, user_id, profile)
         else:
             users = yield self.store.get_users_in_public_due_to_room(room_id)

+ 7 - 14
synapse/http/client.py

@@ -23,7 +23,6 @@ from synapse.http import cancelled_to_request_timed_out_error
 from synapse.util.async import add_timeout_to_deferred
 from synapse.util.caches import CACHE_SIZE_FACTOR
 from synapse.util.logcontext import make_deferred_yieldable
-import synapse.metrics
 from synapse.http.endpoint import SpiderEndpoint
 
 from canonicaljson import encode_canonical_json
@@ -42,6 +41,7 @@ from twisted.web._newclient import ResponseDone
 
 from six import StringIO
 
+from prometheus_client import Counter
 import simplejson as json
 import logging
 import urllib
@@ -49,16 +49,9 @@ import urllib
 
 logger = logging.getLogger(__name__)
 
-metrics = synapse.metrics.get_metrics_for(__name__)
-
-outgoing_requests_counter = metrics.register_counter(
-    "requests",
-    labels=["method"],
-)
-incoming_responses_counter = metrics.register_counter(
-    "responses",
-    labels=["method", "code"],
-)
+outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
+incoming_responses_counter = Counter("synapse_http_client_responses", "",
+                                     ["method", "code"])
 
 
 class SimpleHttpClient(object):
@@ -95,7 +88,7 @@ class SimpleHttpClient(object):
     def request(self, method, uri, *args, **kwargs):
         # A small wrapper around self.agent.request() so we can easily attach
         # counters to it
-        outgoing_requests_counter.inc(method)
+        outgoing_requests_counter.labels(method).inc()
 
         logger.info("Sending request %s %s", method, uri)
 
@@ -109,14 +102,14 @@ class SimpleHttpClient(object):
             )
             response = yield make_deferred_yieldable(request_deferred)
 
-            incoming_responses_counter.inc(method, response.code)
+            incoming_responses_counter.labels(method, response.code).inc()
             logger.info(
                 "Received response to  %s %s: %s",
                 method, uri, response.code
             )
             defer.returnValue(response)
         except Exception as e:
-            incoming_responses_counter.inc(method, "ERR")
+            incoming_responses_counter.labels(method, "ERR").inc()
             logger.info(
                 "Error sending request to  %s %s: %s %s",
                 method, uri, type(e).__name__, e.message

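The hunks above migrate from synapse's homegrown `metrics.register_counter(...)` / `counter.inc(method)` API to prometheus_client's `Counter(name, documentation, labelnames)` objects, where label values are bound via `.labels(...)` before `.inc()`. A toy stand-in (not the real prometheus_client, which also handles registries and exposition) just to illustrate the call shape:

```python
from collections import defaultdict


class ToyCounter(object):
    """Minimal stand-in for prometheus_client.Counter: one float per tuple
    of label values, incremented via .labels(...).inc()."""

    def __init__(self, name, documentation, labelnames=()):
        self.name = name
        self.labelnames = tuple(labelnames)
        self.values = defaultdict(float)

    def labels(self, *labelvalues):
        key = tuple(str(v) for v in labelvalues)
        outer = self

        class _Child(object):
            def inc(self, amount=1):
                outer.values[key] += amount

        return _Child()


outgoing = ToyCounter("synapse_http_client_requests", "", ["method"])
outgoing.labels("GET").inc()
outgoing.labels("GET").inc()
outgoing.labels("POST").inc()
# outgoing.values == {("GET",): 2.0, ("POST",): 1.0}
```

Binding labels up front is why the diff rewrites every `counter.inc(method, code)` into `counter.labels(method, code).inc()`: in prometheus_client the label values select a child time series, and `inc()` takes only an optional amount.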
+ 11 - 13
synapse/http/matrixfederationclient.py

@@ -42,20 +42,18 @@ import random
 import sys
 import urllib
 from six.moves.urllib import parse as urlparse
+from six import string_types
+
+
+from prometheus_client import Counter
 
 logger = logging.getLogger(__name__)
 outbound_logger = logging.getLogger("synapse.http.outbound")
 
-metrics = synapse.metrics.get_metrics_for(__name__)
-
-outgoing_requests_counter = metrics.register_counter(
-    "requests",
-    labels=["method"],
-)
-incoming_responses_counter = metrics.register_counter(
-    "responses",
-    labels=["method", "code"],
-)
+outgoing_requests_counter = Counter("synapse_http_matrixfederationclient_requests",
+                                    "", ["method"])
+incoming_responses_counter = Counter("synapse_http_matrixfederationclient_responses",
+                                     "", ["method", "code"])
 
 
 MAX_LONG_RETRIES = 10
@@ -553,7 +551,7 @@ class MatrixFederationHttpClient(object):
 
         encoded_args = {}
         for k, vs in args.items():
-            if isinstance(vs, basestring):
+            if isinstance(vs, string_types):
                 vs = [vs]
             encoded_args[k] = [v.encode("UTF-8") for v in vs]
 
@@ -668,7 +666,7 @@ def check_content_type_is_json(headers):
         RuntimeError if the
 
     """
-    c_type = headers.getRawHeaders("Content-Type")
+    c_type = headers.getRawHeaders(b"Content-Type")
     if c_type is None:
         raise RuntimeError(
             "No Content-Type header"
@@ -685,7 +683,7 @@ def check_content_type_is_json(headers):
 def encode_query_args(args):
     encoded_args = {}
     for k, vs in args.items():
-        if isinstance(vs, basestring):
+        if isinstance(vs, string_types):
             vs = [vs]
         encoded_args[k] = [v.encode("UTF-8") for v in vs]
 

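`basestring` no longer exists on Python 3, which is why the hunks above switch the `isinstance` checks to `six.string_types`. The `encode_query_args` pattern, sketched with a stdlib-only stand-in for the six import:

```python
import sys

# six.string_types equivalent: (basestring,) on Py2, (str,) on Py3
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821


def encode_query_args(args):
    encoded_args = {}
    for k, vs in args.items():
        if isinstance(vs, string_types):
            vs = [vs]  # wrap a bare string so we don't iterate its characters
        encoded_args[k] = [v.encode("UTF-8") for v in vs]
    return encoded_args


encode_query_args({"limit": "10", "types": ["m.room.message"]})
# {'limit': [b'10'], 'types': [b'm.room.message']}
```

Without the wrap, a bare string value would be iterated character by character, producing one query parameter per letter.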
+ 91 - 125
synapse/http/request_metrics.py

@@ -16,137 +16,109 @@
 
 import logging
 
-import synapse.metrics
+from prometheus_client.core import Counter, Histogram
+from synapse.metrics import LaterGauge
+
 from synapse.util.logcontext import LoggingContext
 
 logger = logging.getLogger(__name__)
 
-metrics = synapse.metrics.get_metrics_for("synapse.http.server")
 
 # total number of responses served, split by method/servlet/tag
-response_count = metrics.register_counter(
-    "response_count",
-    labels=["method", "servlet", "tag"],
-    alternative_names=(
-        # the following are all deprecated aliases for the same metric
-        metrics.name_prefix + x for x in (
-            "_requests",
-            "_response_time:count",
-            "_response_ru_utime:count",
-            "_response_ru_stime:count",
-            "_response_db_txn_count:count",
-            "_response_db_txn_duration:count",
-        )
-    )
+response_count = Counter(
+    "synapse_http_server_response_count", "", ["method", "servlet", "tag"]
 )
 
-requests_counter = metrics.register_counter(
-    "requests_received",
-    labels=["method", "servlet", ],
+requests_counter = Counter(
+    "synapse_http_server_requests_received", "", ["method", "servlet"]
 )
 
-outgoing_responses_counter = metrics.register_counter(
-    "responses",
-    labels=["method", "code"],
+outgoing_responses_counter = Counter(
+    "synapse_http_server_responses", "", ["method", "code"]
 )
 
-response_timer = metrics.register_counter(
-    "response_time_seconds",
-    labels=["method", "servlet", "tag"],
-    alternative_names=(
-        metrics.name_prefix + "_response_time:total",
-    ),
+response_timer = Histogram(
+    "synapse_http_server_response_time_seconds", "sec", ["method", "servlet", "tag"]
 )
 
-response_ru_utime = metrics.register_counter(
-    "response_ru_utime_seconds", labels=["method", "servlet", "tag"],
-    alternative_names=(
-        metrics.name_prefix + "_response_ru_utime:total",
-    ),
+response_ru_utime = Counter(
+    "synapse_http_server_response_ru_utime_seconds", "sec", ["method", "servlet", "tag"]
 )
 
-response_ru_stime = metrics.register_counter(
-    "response_ru_stime_seconds", labels=["method", "servlet", "tag"],
-    alternative_names=(
-        metrics.name_prefix + "_response_ru_stime:total",
-    ),
+response_ru_stime = Counter(
+    "synapse_http_server_response_ru_stime_seconds", "sec", ["method", "servlet", "tag"]
 )
 
-response_db_txn_count = metrics.register_counter(
-    "response_db_txn_count", labels=["method", "servlet", "tag"],
-    alternative_names=(
-        metrics.name_prefix + "_response_db_txn_count:total",
-    ),
+response_db_txn_count = Counter(
+    "synapse_http_server_response_db_txn_count", "", ["method", "servlet", "tag"]
 )
 
 # seconds spent waiting for db txns, excluding scheduling time, when processing
 # this request
-response_db_txn_duration = metrics.register_counter(
-    "response_db_txn_duration_seconds", labels=["method", "servlet", "tag"],
-    alternative_names=(
-        metrics.name_prefix + "_response_db_txn_duration:total",
-    ),
+response_db_txn_duration = Counter(
+    "synapse_http_server_response_db_txn_duration_seconds",
+    "",
+    ["method", "servlet", "tag"],
 )
 
 # seconds spent waiting for a db connection, when processing this request
-response_db_sched_duration = metrics.register_counter(
-    "response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
+response_db_sched_duration = Counter(
+    "synapse_http_server_response_db_sched_duration_seconds",
+    "",
+    ["method", "servlet", "tag"],
 )
 
 # size in bytes of the response written
-response_size = metrics.register_counter(
-    "response_size", labels=["method", "servlet", "tag"]
+response_size = Counter(
+    "synapse_http_server_response_size", "", ["method", "servlet", "tag"]
 )
 
 # In flight metrics are incremented while the requests are in flight, rather
 # than when the response was written.
 
-in_flight_requests_ru_utime = metrics.register_counter(
-    "in_flight_requests_ru_utime_seconds", labels=["method", "servlet"],
+in_flight_requests_ru_utime = Counter(
+    "synapse_http_server_in_flight_requests_ru_utime_seconds",
+    "",
+    ["method", "servlet"],
 )
 
-in_flight_requests_ru_stime = metrics.register_counter(
-    "in_flight_requests_ru_stime_seconds", labels=["method", "servlet"],
+in_flight_requests_ru_stime = Counter(
+    "synapse_http_server_in_flight_requests_ru_stime_seconds",
+    "",
+    ["method", "servlet"],
)
 
-in_flight_requests_db_txn_count = metrics.register_counter(
-    "in_flight_requests_db_txn_count", labels=["method", "servlet"],
+in_flight_requests_db_txn_count = Counter(
+    "synapse_http_server_in_flight_requests_db_txn_count", "", ["method", "servlet"]
 )
 
 # seconds spent waiting for db txns, excluding scheduling time, when processing
 # this request
-in_flight_requests_db_txn_duration = metrics.register_counter(
-    "in_flight_requests_db_txn_duration_seconds", labels=["method", "servlet"],
+in_flight_requests_db_txn_duration = Counter(
+    "synapse_http_server_in_flight_requests_db_txn_duration_seconds",
+    "",
+    ["method", "servlet"],
 )
 
 # seconds spent waiting for a db connection, when processing this request
-in_flight_requests_db_sched_duration = metrics.register_counter(
-    "in_flight_requests_db_sched_duration_seconds", labels=["method", "servlet"]
+in_flight_requests_db_sched_duration = Counter(
+    "synapse_http_server_in_flight_requests_db_sched_duration_seconds",
+    "",
+    ["method", "servlet"],
 )
 
-
 # The set of all in flight requests, set[RequestMetrics]
 _in_flight_requests = set()
 
 
-def _collect_in_flight():
-    """Called just before metrics are collected, so we use it to update all
-    the in flight request metrics
-    """
-
-    for rm in _in_flight_requests:
-        rm.update_metrics()
-
-
-metrics.register_collector(_collect_in_flight)
-
-
 def _get_in_flight_counts():
     """Returns a count of all in flight requests by (method, server_name)
 
     Returns:
         dict[tuple[str, str], int]
     """
+    for rm in _in_flight_requests:
+        rm.update_metrics()
 
     # Map from (method, name) -> int, the number of in flight requests of that
     # type
@@ -158,16 +130,17 @@ def _get_in_flight_counts():
     return counts
 
 
-metrics.register_callback(
-    "in_flight_requests_count",
+LaterGauge(
+    "synapse_http_request_metrics_in_flight_requests_count",
+    "",
+    ["method", "servlet"],
     _get_in_flight_counts,
-    labels=["method", "servlet"]
 )
 
 
 class RequestMetrics(object):
-    def start(self, time_msec, name, method):
-        self.start = time_msec
+    def start(self, time_sec, name, method):
+        self.start = time_sec
         self.start_context = LoggingContext.current_context()
         self.name = name
         self.method = method
@@ -176,7 +149,7 @@ class RequestMetrics(object):
 
         _in_flight_requests.add(self)
 
-    def stop(self, time_msec, request):
+    def stop(self, time_sec, request):
         _in_flight_requests.discard(self)
 
         context = LoggingContext.current_context()
@@ -192,34 +165,29 @@ class RequestMetrics(object):
                 )
                 return
 
-        outgoing_responses_counter.inc(request.method, str(request.code))
+        outgoing_responses_counter.labels(request.method, str(request.code)).inc()
 
-        response_count.inc(request.method, self.name, tag)
+        response_count.labels(request.method, self.name, tag).inc()
 
-        response_timer.inc_by(
-            time_msec - self.start, request.method,
-            self.name, tag
+        response_timer.labels(request.method, self.name, tag).observe(
+            time_sec - self.start
         )
 
         ru_utime, ru_stime = context.get_resource_usage()
 
-        response_ru_utime.inc_by(
-            ru_utime, request.method, self.name, tag
-        )
-        response_ru_stime.inc_by(
-            ru_stime, request.method, self.name, tag
-        )
-        response_db_txn_count.inc_by(
-            context.db_txn_count, request.method, self.name, tag
+        response_ru_utime.labels(request.method, self.name, tag).inc(ru_utime)
+        response_ru_stime.labels(request.method, self.name, tag).inc(ru_stime)
+        response_db_txn_count.labels(request.method, self.name, tag).inc(
+            context.db_txn_count
        )
-        response_db_txn_duration.inc_by(
-            context.db_txn_duration_ms / 1000., request.method, self.name, tag
+        response_db_txn_duration.labels(request.method, self.name, tag).inc(
+            context.db_txn_duration_sec
        )
-        response_db_sched_duration.inc_by(
-            context.db_sched_duration_ms / 1000., request.method, self.name, tag
+        response_db_sched_duration.labels(request.method, self.name, tag).inc(
+            context.db_sched_duration_sec
        )
 
-        response_size.inc_by(request.sentLength, request.method, self.name, tag)
+        response_size.labels(request.method, self.name, tag).inc(request.sentLength)
 
         # We always call this at the end to ensure that we update the metrics
         # regardless of whether a call to /metrics while the request was in
@@ -229,27 +197,21 @@ class RequestMetrics(object):
     def update_metrics(self):
         """Updates the in flight metrics with values from this request.
         """
-
         diff = self._request_stats.update(self.start_context)
 
-        in_flight_requests_ru_utime.inc_by(
-            diff.ru_utime, self.method, self.name,
-        )
-
-        in_flight_requests_ru_stime.inc_by(
-            diff.ru_stime, self.method, self.name,
-        )
+        in_flight_requests_ru_utime.labels(self.method, self.name).inc(diff.ru_utime)
+        in_flight_requests_ru_stime.labels(self.method, self.name).inc(diff.ru_stime)
 
-        in_flight_requests_db_txn_count.inc_by(
-            diff.db_txn_count, self.method, self.name,
+        in_flight_requests_db_txn_count.labels(self.method, self.name).inc(
+            diff.db_txn_count
        )
 
-        in_flight_requests_db_txn_duration.inc_by(
-            diff.db_txn_duration_ms / 1000., self.method, self.name,
+        in_flight_requests_db_txn_duration.labels(self.method, self.name).inc(
+            diff.db_txn_duration_sec
        )
 
-        in_flight_requests_db_sched_duration.inc_by(
-            diff.db_sched_duration_ms / 1000., self.method, self.name,
+        in_flight_requests_db_sched_duration.labels(self.method, self.name).inc(
+            diff.db_sched_duration_sec
        )
 
 
@@ -258,17 +220,21 @@ class _RequestStats(object):
     """
 
     __slots__ = [
-        "ru_utime", "ru_stime",
-        "db_txn_count", "db_txn_duration_ms", "db_sched_duration_ms",
+        "ru_utime",
+        "ru_stime",
+        "db_txn_count",
+        "db_txn_duration_sec",
+        "db_sched_duration_sec",
     ]
 
-    def __init__(self, ru_utime, ru_stime, db_txn_count,
-                 db_txn_duration_ms, db_sched_duration_ms):
+    def __init__(
+        self, ru_utime, ru_stime, db_txn_count, db_txn_duration_sec, db_sched_duration_sec
+    ):
         self.ru_utime = ru_utime
         self.ru_stime = ru_stime
         self.db_txn_count = db_txn_count
-        self.db_txn_duration_ms = db_txn_duration_ms
-        self.db_sched_duration_ms = db_sched_duration_ms
+        self.db_txn_duration_sec = db_txn_duration_sec
+        self.db_sched_duration_sec = db_sched_duration_sec
 
     @staticmethod
     def from_context(context):
@@ -277,8 +243,8 @@ class _RequestStats(object):
         return _RequestStats(
             ru_utime, ru_stime,
             context.db_txn_count,
-            context.db_txn_duration_ms,
-            context.db_sched_duration_ms,
+            context.db_txn_duration_sec,
+            context.db_sched_duration_sec,
         )
 
     def update(self, context):
@@ -294,14 +260,14 @@ class _RequestStats(object):
             new.ru_utime - self.ru_utime,
             new.ru_stime - self.ru_stime,
             new.db_txn_count - self.db_txn_count,
-            new.db_txn_duration_ms - self.db_txn_duration_ms,
-            new.db_sched_duration_ms - self.db_sched_duration_ms,
+            new.db_txn_duration_sec - self.db_txn_duration_sec,
+            new.db_sched_duration_sec - self.db_sched_duration_sec,
         )
 
         self.ru_utime = new.ru_utime
         self.ru_stime = new.ru_stime
         self.db_txn_count = new.db_txn_count
-        self.db_txn_duration_ms = new.db_txn_duration_ms
-        self.db_sched_duration_ms = new.db_sched_duration_ms
+        self.db_txn_duration_sec = new.db_txn_duration_sec
+        self.db_sched_duration_sec = new.db_sched_duration_sec
 
         return diff

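The `_RequestStats.update` pattern above snapshots cumulative per-request counters and hands back only the delta since the last snapshot, so the in-flight counters can be bumped incrementally each time `/metrics` is scraped. A simplified sketch with the fields trimmed to two (names follow the diff, the class is a reduction, not the real one):

```python
class RequestStats(object):
    """Tracks cumulative counters; update() returns the delta since last call."""

    def __init__(self, db_txn_count=0, db_txn_duration_sec=0.0):
        self.db_txn_count = db_txn_count
        self.db_txn_duration_sec = db_txn_duration_sec

    def update(self, new_count, new_duration_sec):
        # compute what changed since the previous snapshot...
        diff = (new_count - self.db_txn_count,
                new_duration_sec - self.db_txn_duration_sec)
        # ...then roll the snapshot forward
        self.db_txn_count = new_count
        self.db_txn_duration_sec = new_duration_sec
        return diff


stats = RequestStats()
stats.update(3, 0.50)   # (3, 0.5)   - first delta is the full total so far
stats.update(5, 0.75)   # (2, 0.25)  - only what changed since the last update
```

Incrementing by deltas keeps the Prometheus counters monotonic even though the same request is sampled repeatedly while still in flight.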
+ 2 - 2
synapse/http/server.py

@@ -210,8 +210,8 @@ def wrap_request_handler_with_logging(h):
                         # dispatching to the handler, so that the handler
                         # can update the servlet name in the request
                         # metrics
-                        requests_counter.inc(request.method,
-                                             request.request_metrics.name)
+                        requests_counter.labels(request.method,
+                                                request.request_metrics.name).inc()
                         yield d
     return wrapped_request_handler
 

+ 11 - 11
synapse/http/site.py

@@ -56,7 +56,7 @@ class SynapseRequest(Request):

     def __repr__(self):
         # We overwrite this so that we don't log ``access_token``
-        return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
+        return '<%s at 0x%x method=%r uri=%r clientproto=%r site=%r>' % (
             self.__class__.__name__,
             id(self),
             self.method,
@@ -83,7 +83,7 @@ class SynapseRequest(Request):
         return Request.render(self, resrc)

     def _started_processing(self, servlet_name):
-        self.start_time = int(time.time() * 1000)
+        self.start_time = time.time()
         self.request_metrics = RequestMetrics()
         self.request_metrics.start(
             self.start_time, name=servlet_name, method=self.method,
@@ -102,26 +102,26 @@ class SynapseRequest(Request):
             context = LoggingContext.current_context()
             ru_utime, ru_stime = context.get_resource_usage()
             db_txn_count = context.db_txn_count
-            db_txn_duration_ms = context.db_txn_duration_ms
-            db_sched_duration_ms = context.db_sched_duration_ms
+            db_txn_duration_sec = context.db_txn_duration_sec
+            db_sched_duration_sec = context.db_sched_duration_sec
         except Exception:
             ru_utime, ru_stime = (0, 0)
-            db_txn_count, db_txn_duration_ms = (0, 0)
+            db_txn_count, db_txn_duration_sec = (0, 0)

-        end_time = int(time.time() * 1000)
+        end_time = time.time()

         self.site.access_logger.info(
             "%s - %s - {%s}"
-            " Processed request: %dms (%dms, %dms) (%dms/%dms/%d)"
+            " Processed request: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
             " %sB %s \"%s %s %s\" \"%s\"",
             self.getClientIP(),
             self.site.site_tag,
             self.authenticated_entity,
             end_time - self.start_time,
-            int(ru_utime * 1000),
-            int(ru_stime * 1000),
-            db_sched_duration_ms,
-            db_txn_duration_ms,
+            ru_utime,
+            ru_stime,
+            db_sched_duration_sec,
+            db_txn_duration_sec,
             int(db_txn_count),
             self.sentLength,
             self.code,
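The site.py hunk moves request timing from integer milliseconds (`int(time.time() * 1000)`) to plain float seconds, with durations rendered as `%.3fsec` in the access log. A small sketch of the new formatting (`format_duration_line` is a hypothetical helper for illustration, not part of Synapse):

```python
def format_duration_line(start, end, ru_utime, ru_stime):
    # Durations are floats in seconds now, so "%.3f" keeps millisecond
    # precision without the old int(x * 1000) round-trip.
    return "Processed request: %.3fsec (%.3fsec, %.3fsec)" % (
        end - start, ru_utime, ru_stime,
    )


line = format_duration_line(100.0, 100.1234, 0.05, 0.01)
```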

+ 127 - 114
synapse/metrics/__init__.py

@@ -17,165 +17,178 @@ import logging
 import functools
 import time
 import gc
+import os
 import platform
+import attr

-from twisted.internet import reactor
+from prometheus_client import Gauge, Histogram, Counter
+from prometheus_client.core import GaugeMetricFamily, REGISTRY

-from .metric import (
-    CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
-    MemoryUsageMetric, GaugeMetric,
-)
-from .process_collector import register_process_collector
+from twisted.internet import reactor


 logger = logging.getLogger(__name__)

-
-running_on_pypy = platform.python_implementation() == 'PyPy'
+running_on_pypy = platform.python_implementation() == "PyPy"
 all_metrics = []
 all_collectors = []
+all_gauges = {}
+
+HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
+
+
+class RegistryProxy(object):
+
+    @staticmethod
+    def collect():
+        for metric in REGISTRY.collect():
+            if not metric.name.startswith("__"):
+                yield metric


-class Metrics(object):
-    """ A single Metrics object gives a (mutable) slice view of the all_metrics
-    dict, allowing callers to easily register new metrics that are namespaced
-    nicely."""
+@attr.s(hash=True)
+class LaterGauge(object):

-    def __init__(self, name):
-        self.name_prefix = name
+    name = attr.ib()
+    desc = attr.ib()
+    labels = attr.ib(hash=False)
+    caller = attr.ib()

-    def make_subspace(self, name):
-        return Metrics("%s_%s" % (self.name_prefix, name))
+    def collect(self):

-    def register_collector(self, func):
-        all_collectors.append(func)
+        g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)

-    def _register(self, metric_class, name, *args, **kwargs):
-        full_name = "%s_%s" % (self.name_prefix, name)
+        try:
+            calls = self.caller()
+        except Exception:
+            logger.exception(
+                "Exception running callback for LaterGuage(%s)",
+                self.name,
+            )
+            yield g
+            return

-        metric = metric_class(full_name, *args, **kwargs)
+        if isinstance(calls, dict):
+            for k, v in calls.items():
+                g.add_metric(k, v)
+        else:
+            g.add_metric([], calls)

-        all_metrics.append(metric)
-        return metric
+        yield g

-    def register_counter(self, *args, **kwargs):
-        """
-        Returns:
-            CounterMetric
-        """
-        return self._register(CounterMetric, *args, **kwargs)
+    def __attrs_post_init__(self):
+        self._register()

-    def register_gauge(self, *args, **kwargs):
-        """
-        Returns:
-            GaugeMetric
-        """
-        return self._register(GaugeMetric, *args, **kwargs)
+    def _register(self):
+        if self.name in all_gauges.keys():
+            logger.warning("%s already registered, reregistering" % (self.name,))
+            REGISTRY.unregister(all_gauges.pop(self.name))

-    def register_callback(self, *args, **kwargs):
-        """
-        Returns:
-            CallbackMetric
-        """
-        return self._register(CallbackMetric, *args, **kwargs)
+        REGISTRY.register(self)
+        all_gauges[self.name] = self

-    def register_distribution(self, *args, **kwargs):
-        """
-        Returns:
-            DistributionMetric
-        """
-        return self._register(DistributionMetric, *args, **kwargs)

-    def register_cache(self, *args, **kwargs):
-        """
-        Returns:
-            CacheMetric
-        """
-        return self._register(CacheMetric, *args, **kwargs)
+#
+# Detailed CPU metrics
+#
+
+class CPUMetrics(object):

+    def __init__(self):
+        ticks_per_sec = 100
+        try:
+            # Try and get the system config
+            ticks_per_sec = os.sysconf('SC_CLK_TCK')
+        except (ValueError, TypeError, AttributeError):
+            pass

-def register_memory_metrics(hs):
-    try:
-        import psutil
-        process = psutil.Process()
-        process.memory_info().rss
-    except (ImportError, AttributeError):
-        logger.warn(
-            "psutil is not installed or incorrect version."
-            " Disabling memory metrics."
-        )
-        return
-    metric = MemoryUsageMetric(hs, psutil)
-    all_metrics.append(metric)
+        self.ticks_per_sec = ticks_per_sec

+    def collect(self):
+        if not HAVE_PROC_SELF_STAT:
+            return

-def get_metrics_for(pkg_name):
-    """ Returns a Metrics instance for conveniently creating metrics
-    namespaced with the given name prefix. """
+        with open("/proc/self/stat") as s:
+            line = s.read()
+            raw_stats = line.split(") ", 1)[1].split(" ")

-    # Convert a "package.name" to "package_name" because Prometheus doesn't
-    # let us use . in metric names
-    return Metrics(pkg_name.replace(".", "_"))
+            user = GaugeMetricFamily("process_cpu_user_seconds_total", "")
+            user.add_metric([], float(raw_stats[11]) / self.ticks_per_sec)
+            yield user

+            sys = GaugeMetricFamily("process_cpu_system_seconds_total", "")
+            sys.add_metric([], float(raw_stats[12]) / self.ticks_per_sec)
+            yield sys

-def render_all():
-    strs = []

-    for collector in all_collectors:
-        collector()
+REGISTRY.register(CPUMetrics())

-    for metric in all_metrics:
-        try:
-            strs += metric.render()
-        except Exception:
-            strs += ["# FAILED to render"]
-            logger.exception("Failed to render metric")
+#
+# Python GC metrics
+#
+
+gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"])
+gc_time = Histogram(
+    "python_gc_time",
+    "Time taken to GC (sec)",
+    ["gen"],
+    buckets=[0.0025, 0.005, 0.01, 0.025, 0.05, 0.10, 0.25, 0.50, 1.00, 2.50,
+             5.00, 7.50, 15.00, 30.00, 45.00, 60.00],
+)

-    strs.append("")  # to generate a final CRLF

-    return "\n".join(strs)
+class GCCounts(object):

+    def collect(self):
+        cm = GaugeMetricFamily("python_gc_counts", "GC cycle counts", labels=["gen"])
+        for n, m in enumerate(gc.get_count()):
+            cm.add_metric([str(n)], m)

-register_process_collector(get_metrics_for("process"))
+        yield cm


-python_metrics = get_metrics_for("python")
+REGISTRY.register(GCCounts())

-gc_time = python_metrics.register_distribution("gc_time", labels=["gen"])
-gc_unreachable = python_metrics.register_counter("gc_unreachable_total", labels=["gen"])
-python_metrics.register_callback(
-    "gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"]
+#
+# Twisted reactor metrics
+#
+
+tick_time = Histogram(
+    "python_twisted_reactor_tick_time",
+    "Tick time of the Twisted reactor (sec)",
+    buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
+)
+pending_calls_metric = Histogram(
+    "python_twisted_reactor_pending_calls",
+    "Pending calls",
+    buckets=[1, 2, 5, 10, 25, 50, 100, 250, 500, 1000],
 )

-reactor_metrics = get_metrics_for("python.twisted.reactor")
-tick_time = reactor_metrics.register_distribution("tick_time")
-pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
+#
+# Federation Metrics
+#
+
+sent_edus_counter = Counter("synapse_federation_client_sent_edus", "")
+
+sent_transactions_counter = Counter("synapse_federation_client_sent_transactions", "")

-synapse_metrics = get_metrics_for("synapse")
+events_processed_counter = Counter("synapse_federation_client_events_processed", "")

 # Used to track where various components have processed in the event stream,
 # e.g. federation sending, appservice sending, etc.
-event_processing_positions = synapse_metrics.register_gauge(
-    "event_processing_positions", labels=["name"],
-)
+event_processing_positions = Gauge("synapse_event_processing_positions", "", ["name"])

 # Used to track the current max events stream position
-event_persisted_position = synapse_metrics.register_gauge(
-    "event_persisted_position",
-)
+event_persisted_position = Gauge("synapse_event_persisted_position", "")

 # Used to track the received_ts of the last event processed by various
 # components
-event_processing_last_ts = synapse_metrics.register_gauge(
-    "event_processing_last_ts", labels=["name"],
-)
+event_processing_last_ts = Gauge("synapse_event_processing_last_ts", "", ["name"])

 # Used to track the lag processing events. This is the time difference
 # between the last processed event's received_ts and the time it was
 # finished being processed.
-event_processing_lag = synapse_metrics.register_gauge(
-    "event_processing_lag", labels=["name"],
-)
+event_processing_lag = Gauge("synapse_event_processing_lag", "", ["name"])


 def runUntilCurrentTimer(func):
@@ -197,17 +210,17 @@ def runUntilCurrentTimer(func):
             num_pending += 1

         num_pending += len(reactor.threadCallQueue)
-        start = time.time() * 1000
+        start = time.time()
         ret = func(*args, **kwargs)
-        end = time.time() * 1000
+        end = time.time()

         # record the amount of wallclock time spent running pending calls.
         # This is a proxy for the actual amount of time between reactor polls,
         # since about 25% of time is actually spent running things triggered by
         # I/O events, but that is harder to capture without rewriting half the
         # reactor.
-        tick_time.inc_by(end - start)
-        pending_calls_metric.inc_by(num_pending)
+        tick_time.observe(end - start)
+        pending_calls_metric.observe(num_pending)

         if running_on_pypy:
             return ret
@@ -220,12 +233,12 @@ def runUntilCurrentTimer(func):
             if threshold[i] < counts[i]:
                 logger.info("Collecting gc %d", i)

-                start = time.time() * 1000
+                start = time.time()
                 unreachable = gc.collect(i)
-                end = time.time() * 1000
+                end = time.time()

-                gc_time.inc_by(end - start, i)
-                gc_unreachable.inc_by(unreachable, i)
+                gc_time.labels(i).observe(end - start)
+                gc_unreachable.labels(i).set(unreachable)

         return ret

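`LaterGauge`, `CPUMetrics` and `GCCounts` above all follow the same custom-collector pattern: a collector is just an object whose `collect()` yields metric families, and the registry drives every collector on each scrape. A stdlib sketch of that protocol (the registry and collector classes here are illustrative stand-ins for prometheus_client's, with single samples instead of metric families):

```python
class MiniRegistry:
    """Stdlib sketch of a scrape-driven metric registry."""

    def __init__(self):
        self._collectors = []

    def register(self, collector):
        self._collectors.append(collector)

    def scrape(self):
        # Values are computed lazily, at scrape time -- this is what lets a
        # collector report live state without a background update loop.
        samples = {}
        for collector in self._collectors:
            for name, value in collector.collect():
                samples[name] = value
        return samples


class QueueDepthCollector:
    """Example collector: reports the current depth of a queue."""

    def __init__(self, queue):
        self.queue = queue

    def collect(self):
        yield ("myapp_queue_depth", len(self.queue))


registry = MiniRegistry()
queue = ["job1", "job2"]
registry.register(QueueDepthCollector(queue))
```

Each scrape re-runs `collect()`, so the reported depth tracks the queue with no explicit `set()` calls anywhere.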

+ 0 - 328
synapse/metrics/metric.py

@@ -1,328 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2015, 2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from itertools import chain
-import logging
-import re
-
-logger = logging.getLogger(__name__)
-
-
-def flatten(items):
-    """Flatten a list of lists
-
-    Args:
-        items: iterable[iterable[X]]
-
-    Returns:
-        list[X]: flattened list
-    """
-    return list(chain.from_iterable(items))
-
-
-class BaseMetric(object):
-    """Base class for metrics which report a single value per label set
-    """
-
-    def __init__(self, name, labels=[], alternative_names=[]):
-        """
-        Args:
-            name (str): principal name for this metric
-            labels (list(str)): names of the labels which will be reported
-                for this metric
-            alternative_names (iterable(str)): list of alternative names for
-                 this metric. This can be useful to provide a migration path
-                when renaming metrics.
-        """
-        self._names = [name] + list(alternative_names)
-        self.labels = labels  # OK not to clone as we never write it
-
-    def dimension(self):
-        return len(self.labels)
-
-    def is_scalar(self):
-        return not len(self.labels)
-
-    def _render_labelvalue(self, value):
-        return '"%s"' % (_escape_label_value(value),)
-
-    def _render_key(self, values):
-        if self.is_scalar():
-            return ""
-        return "{%s}" % (
-            ",".join(["%s=%s" % (k, self._render_labelvalue(v))
-                      for k, v in zip(self.labels, values)])
-        )
-
-    def _render_for_labels(self, label_values, value):
-        """Render this metric for a single set of labels
-
-        Args:
-            label_values (list[object]): values for each of the labels,
-                (which get stringified).
-            value: value of the metric at with these labels
-
-        Returns:
-            iterable[str]: rendered metric
-        """
-        rendered_labels = self._render_key(label_values)
-        return (
-            "%s%s %.12g" % (name, rendered_labels, value)
-            for name in self._names
-        )
-
-    def render(self):
-        """Render this metric
-
-        Each metric is rendered as:
-
-            name{label1="val1",label2="val2"} value
-
-        https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details
-
-        Returns:
-            iterable[str]: rendered metrics
-        """
-        raise NotImplementedError()
-
-
-class CounterMetric(BaseMetric):
-    """The simplest kind of metric; one that stores a monotonically-increasing
-    value that counts events or running totals.
-
-    Example use cases for Counters:
-    - Number of requests processed
-    - Number of items that were inserted into a queue
-    - Total amount of data that a system has processed
-    Counters can only go up (and be reset when the process restarts).
-    """
-
-    def __init__(self, *args, **kwargs):
-        super(CounterMetric, self).__init__(*args, **kwargs)
-
-        # dict[list[str]]: value for each set of label values. the keys are the
-        # label values, in the same order as the labels in self.labels.
-        #
-        # (if the metric is a scalar, the (single) key is the empty tuple).
-        self.counts = {}
-
-        # Scalar metrics are never empty
-        if self.is_scalar():
-            self.counts[()] = 0.
-
-    def inc_by(self, incr, *values):
-        if len(values) != self.dimension():
-            raise ValueError(
-                "Expected as many values to inc() as labels (%d)" % (self.dimension())
-            )
-
-        # TODO: should assert that the tag values are all strings
-
-        if values not in self.counts:
-            self.counts[values] = incr
-        else:
-            self.counts[values] += incr
-
-    def inc(self, *values):
-        self.inc_by(1, *values)
-
-    def render(self):
-        return flatten(
-            self._render_for_labels(k, self.counts[k])
-            for k in sorted(self.counts.keys())
-        )
-
-
-class GaugeMetric(BaseMetric):
-    """A metric that can go up or down
-    """
-
-    def __init__(self, *args, **kwargs):
-        super(GaugeMetric, self).__init__(*args, **kwargs)
-
-        # dict[list[str]]: value for each set of label values. the keys are the
-        # label values, in the same order as the labels in self.labels.
-        #
-        # (if the metric is a scalar, the (single) key is the empty tuple).
-        self.guages = {}
-
-    def set(self, v, *values):
-        if len(values) != self.dimension():
-            raise ValueError(
-                "Expected as many values to inc() as labels (%d)" % (self.dimension())
-            )
-
-        # TODO: should assert that the tag values are all strings
-
-        self.guages[values] = v
-
-    def render(self):
-        return flatten(
-            self._render_for_labels(k, self.guages[k])
-            for k in sorted(self.guages.keys())
-        )
-
-
-class CallbackMetric(BaseMetric):
-    """A metric that returns the numeric value returned by a callback whenever
-    it is rendered. Typically this is used to implement gauges that yield the
-    size or other state of some in-memory object by actively querying it."""
-
-    def __init__(self, name, callback, labels=[]):
-        super(CallbackMetric, self).__init__(name, labels=labels)
-
-        self.callback = callback
-
-    def render(self):
-        try:
-            value = self.callback()
-        except Exception:
-            logger.exception("Failed to render %s", self.name)
-            return ["# FAILED to render " + self.name]
-
-        if self.is_scalar():
-            return list(self._render_for_labels([], value))
-
-        return flatten(
-            self._render_for_labels(k, value[k])
-            for k in sorted(value.keys())
-        )
-
-
-class DistributionMetric(object):
-    """A combination of an event counter and an accumulator, which counts
-    both the number of events and accumulates the total value. Typically this
-    could be used to keep track of method-running times, or other distributions
-    of values that occur in discrete occurances.
-
-    TODO(paul): Try to export some heatmap-style stats?
-    """
-
-    def __init__(self, name, *args, **kwargs):
-        self.counts = CounterMetric(name + ":count", **kwargs)
-        self.totals = CounterMetric(name + ":total", **kwargs)
-
-    def inc_by(self, inc, *values):
-        self.counts.inc(*values)
-        self.totals.inc_by(inc, *values)
-
-    def render(self):
-        return self.counts.render() + self.totals.render()
-
-
-class CacheMetric(object):
-    __slots__ = (
-        "name", "cache_name", "hits", "misses", "evicted_size", "size_callback",
-    )
-
-    def __init__(self, name, size_callback, cache_name):
-        self.name = name
-        self.cache_name = cache_name
-
-        self.hits = 0
-        self.misses = 0
-        self.evicted_size = 0
-
-        self.size_callback = size_callback
-
-    def inc_hits(self):
-        self.hits += 1
-
-    def inc_misses(self):
-        self.misses += 1
-
-    def inc_evictions(self, size=1):
-        self.evicted_size += size
-
-    def render(self):
-        size = self.size_callback()
-        hits = self.hits
-        total = self.misses + self.hits
-
-        return [
-            """%s:hits{name="%s"} %d""" % (self.name, self.cache_name, hits),
-            """%s:total{name="%s"} %d""" % (self.name, self.cache_name, total),
-            """%s:size{name="%s"} %d""" % (self.name, self.cache_name, size),
-            """%s:evicted_size{name="%s"} %d""" % (
-                self.name, self.cache_name, self.evicted_size
-            ),
-        ]
-
-
-class MemoryUsageMetric(object):
-    """Keeps track of the current memory usage, using psutil.
-
-    The class will keep the current min/max/sum/counts of rss over the last
-    WINDOW_SIZE_SEC, by polling UPDATE_HZ times per second
-    """
-
-    UPDATE_HZ = 2  # number of times to get memory per second
-    WINDOW_SIZE_SEC = 30  # the size of the window in seconds
-
-    def __init__(self, hs, psutil):
-        clock = hs.get_clock()
-        self.memory_snapshots = []
-
-        self.process = psutil.Process()
-
-        clock.looping_call(self._update_curr_values, 1000 / self.UPDATE_HZ)
-
-    def _update_curr_values(self):
-        max_size = self.UPDATE_HZ * self.WINDOW_SIZE_SEC
-        self.memory_snapshots.append(self.process.memory_info().rss)
-        self.memory_snapshots[:] = self.memory_snapshots[-max_size:]
-
-    def render(self):
-        if not self.memory_snapshots:
-            return []
-
-        max_rss = max(self.memory_snapshots)
-        min_rss = min(self.memory_snapshots)
-        sum_rss = sum(self.memory_snapshots)
-        len_rss = len(self.memory_snapshots)
-
-        return [
-            "process_psutil_rss:max %d" % max_rss,
-            "process_psutil_rss:min %d" % min_rss,
-            "process_psutil_rss:total %d" % sum_rss,
-            "process_psutil_rss:count %d" % len_rss,
-        ]
-
-
-def _escape_character(m):
-    """Replaces a single character with its escape sequence.
-
-    Args:
-        m (re.MatchObject): A match object whose first group is the single
-            character to replace
-
-    Returns:
-        str
-    """
-    c = m.group(1)
-    if c == "\\":
-        return "\\\\"
-    elif c == "\"":
-        return "\\\""
-    elif c == "\n":
-        return "\\n"
-    return c
-
-
-def _escape_label_value(value):
-    """Takes a label value and escapes quotes, newlines and backslashes
-    """
-    return re.sub(r"([\n\"\\])", _escape_character, str(value))
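The deleted renderer above escaped label values by hand, per the Prometheus text exposition format (backslash, double quote and newline must be escaped); prometheus_client now does this internally. A standalone copy of that escaping, for reference:

```python
import re


def escape_label_value(value):
    """Escape a label value for the Prometheus text exposition format,
    as the deleted _escape_label_value did: \\ -> \\\\, " -> \\", LF -> \\n."""
    def _escape(m):
        c = m.group(1)
        return {"\\": "\\\\", '"': '\\"', "\n": "\\n"}[c]
    return re.sub(r'([\n"\\])', _escape, str(value))
```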

+ 0 - 122
synapse/metrics/process_collector.py

@@ -1,122 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2015, 2016 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import os
-
-
-TICKS_PER_SEC = 100
-BYTES_PER_PAGE = 4096
-
-HAVE_PROC_STAT = os.path.exists("/proc/stat")
-HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
-HAVE_PROC_SELF_LIMITS = os.path.exists("/proc/self/limits")
-HAVE_PROC_SELF_FD = os.path.exists("/proc/self/fd")
-
-# Field indexes from /proc/self/stat, taken from the proc(5) manpage
-STAT_FIELDS = {
-    "utime": 14,
-    "stime": 15,
-    "starttime": 22,
-    "vsize": 23,
-    "rss": 24,
-}
-
-
-stats = {}
-
-# In order to report process_start_time_seconds we need to know the
-# machine's boot time, because the value in /proc/self/stat is relative to
-# this
-boot_time = None
-if HAVE_PROC_STAT:
-    with open("/proc/stat") as _procstat:
-        for line in _procstat:
-            if line.startswith("btime "):
-                boot_time = int(line.split()[1])
-
-
-def update_resource_metrics():
-    if HAVE_PROC_SELF_STAT:
-        global stats
-        with open("/proc/self/stat") as s:
-            line = s.read()
-            # line is PID (command) more stats go here ...
-            raw_stats = line.split(") ", 1)[1].split(" ")
-
-            for (name, index) in STAT_FIELDS.iteritems():
-                # subtract 3 from the index, because proc(5) is 1-based, and
-                # we've lost the first two fields in PID and COMMAND above
-                stats[name] = int(raw_stats[index - 3])
-
-
-def _count_fds():
-    # Not every OS will have a /proc/self/fd directory
-    if not HAVE_PROC_SELF_FD:
-        return 0
-
-    return len(os.listdir("/proc/self/fd"))
-
-
-def register_process_collector(process_metrics):
-    process_metrics.register_collector(update_resource_metrics)
-
-    if HAVE_PROC_SELF_STAT:
-        process_metrics.register_callback(
-            "cpu_user_seconds_total",
-            lambda: float(stats["utime"]) / TICKS_PER_SEC
-        )
-        process_metrics.register_callback(
-            "cpu_system_seconds_total",
-            lambda: float(stats["stime"]) / TICKS_PER_SEC
-        )
-        process_metrics.register_callback(
-            "cpu_seconds_total",
-            lambda: (float(stats["utime"] + stats["stime"])) / TICKS_PER_SEC
-        )
-
-        process_metrics.register_callback(
-            "virtual_memory_bytes",
-            lambda: int(stats["vsize"])
-        )
-        process_metrics.register_callback(
-            "resident_memory_bytes",
-            lambda: int(stats["rss"]) * BYTES_PER_PAGE
-        )
-
-        process_metrics.register_callback(
-            "start_time_seconds",
-            lambda: boot_time + int(stats["starttime"]) / TICKS_PER_SEC
-        )
-
-    if HAVE_PROC_SELF_FD:
-        process_metrics.register_callback(
-            "open_fds",
-            lambda: _count_fds()
-        )
-
-    if HAVE_PROC_SELF_LIMITS:
-        def _get_max_fds():
-            with open("/proc/self/limits") as limits:
-                for line in limits:
-                    if not line.startswith("Max open files "):
-                        continue
-                    # Line is  Max open files  $SOFT  $HARD
-                    return int(line.split()[3])
-            return None
-
-        process_metrics.register_callback(
-            "max_fds",
-            lambda: _get_max_fds()
-        )
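Both this deleted collector and the new `CPUMetrics` split the stat line on `") "` rather than plain spaces, because field 2 (the command name, in parentheses) may itself contain spaces. A sketch with a hypothetical sample line (field values invented; the offsets follow proc(5)):

```python
# Hypothetical /proc/<pid>/stat line: naive .split(" ") would mis-index
# every field after the spaced command name "(synapse master)".
SAMPLE = "1234 (synapse master) S 1 1234 1234 0 -1 4194560 " \
         "0 0 0 0 250 50 0 0 20 0 1 0 100 0 0"


def cpu_fields(stat_line, ticks_per_sec=100):
    """Parse utime/stime as CPUMetrics does: split once on ') ' to drop
    pid and comm, leaving utime (field 14 in proc(5)) at offset 11 and
    stime (field 15) at offset 12 after the two dropped fields and
    1-based numbering are accounted for."""
    raw = stat_line.split(") ", 1)[1].split(" ")
    return float(raw[11]) / ticks_per_sec, float(raw[12]) / ticks_per_sec


user, system = cpu_fields(SAMPLE)
```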

+ 2 - 21
synapse/metrics/resource.py

@@ -13,27 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.web.resource import Resource
-
-import synapse.metrics
-
+from prometheus_client.twisted import MetricsResource

 METRICS_PREFIX = "/_synapse/metrics"

-
-class MetricsResource(Resource):
-    isLeaf = True
-
-    def __init__(self, hs):
-        Resource.__init__(self)  # Resource is old-style, so no super()
-
-        self.hs = hs
-
-    def render_GET(self, request):
-        response = synapse.metrics.render_all()
-
-        request.setHeader("Content-Type", "text/plain")
-        request.setHeader("Content-Length", str(len(response)))
-
-        # Encode as UTF-8 (default)
-        return response.encode()
+__all__ = ["MetricsResource", "METRICS_PREFIX"]

+ 11 - 13
synapse/notifier.py

@@ -28,22 +28,20 @@ from synapse.util.logcontext import PreserveLoggingContext, run_in_background
 from synapse.util.metrics import Measure
 from synapse.types import StreamToken
 from synapse.visibility import filter_events_for_client
-import synapse.metrics
+from synapse.metrics import LaterGauge

 from collections import namedtuple
+from prometheus_client import Counter

 import logging


 logger = logging.getLogger(__name__)

-metrics = synapse.metrics.get_metrics_for(__name__)
+notified_events_counter = Counter("synapse_notifier_notified_events", "")

-notified_events_counter = metrics.register_counter("notified_events")
-
-users_woken_by_stream_counter = metrics.register_counter(
-    "users_woken_by_stream", labels=["stream"]
-)
+users_woken_by_stream_counter = Counter(
+    "synapse_notifier_users_woken_by_stream", "", ["stream"])


 # TODO(paul): Should be shared somewhere
@@ -108,7 +106,7 @@ class _NotifierUserStream(object):
         self.last_notified_ms = time_now_ms
         noify_deferred = self.notify_deferred

-        users_woken_by_stream_counter.inc(stream_key)
+        users_woken_by_stream_counter.labels(stream_key).inc()

         with PreserveLoggingContext():
             self.notify_deferred = ObservableDeferred(defer.Deferred())
@@ -197,14 +195,14 @@ class Notifier(object):
                 all_user_streams.add(x)

             return sum(stream.count_listeners() for stream in all_user_streams)
-        metrics.register_callback("listeners", count_listeners)
+        LaterGauge("synapse_notifier_listeners", "", [], count_listeners)

-        metrics.register_callback(
-            "rooms",
+        LaterGauge(
+            "synapse_notifier_rooms", "", [],
             lambda: count(bool, self.room_to_user_streams.values()),
         )
-        metrics.register_callback(
-            "users",
+        LaterGauge(
+            "synapse_notifier_users", "", [],
             lambda: len(self.user_to_user_stream),
         )

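The `register_callback` → `LaterGauge` conversions above keep the same shape: the gauge stores a callable and evaluates it only when collected, so the value always reflects live notifier state. A stdlib sketch of that idea (class and metric names illustrative; simplified in that a failed callback yields 0 here, whereas LaterGauge yields an empty metric family):

```python
class CallbackGauge:
    """Sketch of a LaterGauge-style callback gauge: the value is computed
    by calling `caller` at collection time, never stored."""

    def __init__(self, name, caller):
        self.name = name
        self.caller = caller

    def collect(self):
        try:
            value = self.caller()
        except Exception:
            value = 0  # simplification: LaterGauge logs and yields no sample
        yield (self.name, value)


streams = {"@alice:hs": object(), "@bob:hs": object()}
users_gauge = CallbackGauge("synapse_notifier_users", lambda: len(streams))
name, value = next(users_gauge.collect())
```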

+ 1 - 1
synapse/push/baserules.py

@@ -39,7 +39,7 @@ def list_with_base_rules(rawrules):
     rawrules = [r for r in rawrules if r['priority_class'] >= 0]
 
     # shove the server default rules for each kind onto the end of each
-    current_prio_class = PRIORITY_CLASS_INVERSE_MAP.keys()[-1]
+    current_prio_class = list(PRIORITY_CLASS_INVERSE_MAP)[-1]
 
     ruleslist.extend(make_base_prepend_rules(
         PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
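The `baserules.py` fix above addresses a common Python 3 gotcha: `dict.keys()` now returns a view object, which cannot be indexed. Wrapping the dict in `list()` restores positional access. A toy sketch (this dict is illustrative, not Synapse's actual `PRIORITY_CLASS_INVERSE_MAP`):

```python
prio_map = {1: "underride", 2: "sender", 3: "room", 4: "content", 5: "override"}

# Python 2: prio_map.keys() returned a list, so keys()[-1] worked.
# Python 3: keys() is a view and raises TypeError on indexing.
try:
    last = prio_map.keys()[-1]
except TypeError:
    # list(d) materialises the keys (insertion order on Python 3.7+).
    last = list(prio_map)[-1]

print(last)  # 5
```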

+ 20 - 23
synapse/push/bulk_push_rule_evaluator.py

@@ -22,35 +22,32 @@ from .push_rule_evaluator import PushRuleEvaluatorForEvent
 
 from synapse.event_auth import get_user_power_level
 from synapse.api.constants import EventTypes, Membership
-from synapse.metrics import get_metrics_for
-from synapse.util.caches import metrics as cache_metrics
+from synapse.util.caches import register_cache
 from synapse.util.caches.descriptors import cached
 from synapse.util.async import Linearizer
 from synapse.state import POWER_KEY
 
 from collections import namedtuple
-
+from prometheus_client import Counter
+from six import itervalues, iteritems
 
 logger = logging.getLogger(__name__)
 
 
 rules_by_room = {}
 
-push_metrics = get_metrics_for(__name__)
 
-push_rules_invalidation_counter = push_metrics.register_counter(
-    "push_rules_invalidation_counter"
-)
-push_rules_state_size_counter = push_metrics.register_counter(
-    "push_rules_state_size_counter"
-)
+push_rules_invalidation_counter = Counter(
+    "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter", "")
+push_rules_state_size_counter = Counter(
+    "synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter", "")
 
 # Measures whether we use the fast path of using state deltas, or if we have to
 # recalculate from scratch
-push_rules_delta_state_cache_metric = cache_metrics.register_cache(
+push_rules_delta_state_cache_metric = register_cache(
     "cache",
-    size_callback=lambda: 0,  # Meaningless size, as this isn't a cache that stores values
-    cache_name="push_rules_delta_state_cache_metric",
+    "push_rules_delta_state_cache_metric",
+    cache=[],  # Meaningless size, as this isn't a cache that stores values
 )
 
 
@@ -64,10 +61,10 @@ class BulkPushRuleEvaluator(object):
         self.store = hs.get_datastore()
         self.auth = hs.get_auth()
 
-        self.room_push_rule_cache_metrics = cache_metrics.register_cache(
+        self.room_push_rule_cache_metrics = register_cache(
             "cache",
-            size_callback=lambda: 0,  # There's not good value for this
-            cache_name="room_push_rule_cache",
+            "room_push_rule_cache",
+            cache=[],  # Meaningless size, as this isn't a cache that stores values
         )
 
     @defer.inlineCallbacks
@@ -126,7 +123,7 @@ class BulkPushRuleEvaluator(object):
             )
             auth_events = yield self.store.get_events(auth_events_ids)
             auth_events = {
-                (e.type, e.state_key): e for e in auth_events.itervalues()
+                (e.type, e.state_key): e for e in itervalues(auth_events)
             }
 
         sender_level = get_user_power_level(event.sender, auth_events)
@@ -160,7 +157,7 @@ class BulkPushRuleEvaluator(object):
 
         condition_cache = {}
 
-        for uid, rules in rules_by_user.iteritems():
+        for uid, rules in iteritems(rules_by_user):
             if event.sender == uid:
                 continue
 
@@ -309,7 +306,7 @@ class RulesForRoom(object):
                 current_state_ids = context.current_state_ids
                 push_rules_delta_state_cache_metric.inc_misses()
 
-            push_rules_state_size_counter.inc_by(len(current_state_ids))
+            push_rules_state_size_counter.inc(len(current_state_ids))
 
             logger.debug(
                 "Looking for member changes in %r %r", state_group, current_state_ids
@@ -406,7 +403,7 @@ class RulesForRoom(object):
         # If the event is a join event then it will be in current state evnts
         # map but not in the DB, so we have to explicitly insert it.
         if event.type == EventTypes.Member:
-            for event_id in member_event_ids.itervalues():
+            for event_id in itervalues(member_event_ids):
                 if event_id == event.event_id:
                     members[event_id] = (event.state_key, event.membership)
 
@@ -414,7 +411,7 @@ class RulesForRoom(object):
             logger.debug("Found members %r: %r", self.room_id, members.values())
 
         interested_in_user_ids = set(
-            user_id for user_id, membership in members.itervalues()
+            user_id for user_id, membership in itervalues(members)
             if membership == Membership.JOIN
         )
 
@@ -426,7 +423,7 @@ class RulesForRoom(object):
         )
 
         user_ids = set(
-            uid for uid, have_pusher in if_users_with_pushers.iteritems() if have_pusher
+            uid for uid, have_pusher in iteritems(if_users_with_pushers) if have_pusher
         )
 
         logger.debug("With pushers: %r", user_ids)
@@ -447,7 +444,7 @@ class RulesForRoom(object):
         )
 
         ret_rules_by_user.update(
-            item for item in rules_by_user.iteritems() if item[0] is not None
+            item for item in iteritems(rules_by_user) if item[0] is not None
        )
 
         self.update_cache(sequence, members, ret_rules_by_user, state_group)
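The many `.iteritems()` → `iteritems(...)` rewrites in this file use six's helper functions, which dispatch to the memory-efficient iterator methods on Python 2 and to the plain `items()`/`values()` views on Python 3. If six is unavailable, the same dispatch can be sketched by hand:

```python
# Hand-rolled equivalents of six.iteritems / six.itervalues: fall back to
# items()/values() when the Python 2-only iterator methods are absent.
def iteritems(d):
    return iter(getattr(d, "iteritems", d.items)())

def itervalues(d):
    return iter(getattr(d, "itervalues", d.values)())

rules_by_user = {"@a:hs": ["rule1"], "@b:hs": ["rule2"]}

pairs = sorted(iteritems(rules_by_user))
print(pairs)  # [('@a:hs', ['rule1']), ('@b:hs', ['rule2'])]
```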

+ 4 - 9
synapse/push/httppusher.py

@@ -20,22 +20,17 @@ from twisted.internet.error import AlreadyCalled, AlreadyCancelled
 
 from . import push_rule_evaluator
 from . import push_tools
-import synapse
 from synapse.push import PusherConfigException
 from synapse.util.logcontext import LoggingContext
 from synapse.util.metrics import Measure
 
-logger = logging.getLogger(__name__)
+from prometheus_client import Counter
 
-metrics = synapse.metrics.get_metrics_for(__name__)
+logger = logging.getLogger(__name__)
 
-http_push_processed_counter = metrics.register_counter(
-    "http_pushes_processed",
-)
+http_push_processed_counter = Counter("synapse_http_httppusher_http_pushes_processed", "")
 
-http_push_failed_counter = metrics.register_counter(
-    "http_pushes_failed",
-)
+http_push_failed_counter = Counter("synapse_http_httppusher_http_pushes_failed", "")
 
 
 class HttpPusher(object):

+ 2 - 1
synapse/push/mailer.py

@@ -229,7 +229,8 @@ class Mailer(object):
                 if room_vars['notifs'] and 'messages' in room_vars['notifs'][-1]:
                     prev_messages = room_vars['notifs'][-1]['messages']
                     for message in notifvars['messages']:
-                        pm = filter(lambda pm: pm['id'] == message['id'], prev_messages)
+                        pm = list(filter(lambda pm: pm['id'] == message['id'],
+                                         prev_messages))
                         if pm:
                             if not message["is_historical"]:
                                 pm[0]["is_historical"] = False
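The `mailer.py` change matters because on Python 3 `filter()` returns a lazy iterator, which is *always* truthy — so the `if pm:` guard would take the branch even with no matches, and `pm[0]` would then fail. Materialising with `list()` restores the Python 2 semantics (empty list is falsy, and indexing works). A small demonstration:

```python
prev_messages = [{"id": "m1", "is_historical": True}]

# No match: a bare filter object would still be truthy on Python 3;
# list() makes the emptiness observable.
pm = list(filter(lambda m: m["id"] == "m2", prev_messages))
print(bool(pm))  # False

# A match: list() also makes pm[0] indexing valid.
pm = list(filter(lambda m: m["id"] == "m1", prev_messages))
print(pm[0]["is_historical"])  # True
```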

+ 1 - 1
synapse/push/presentable_names.py

@@ -113,7 +113,7 @@ def calculate_room_name(store, room_state_ids, user_id, fallback_to_members=True
     # so find out who is in the room that isn't the user.
     if "m.room.member" in room_state_bytype_ids:
         member_events = yield store.get_events(
-            room_state_bytype_ids["m.room.member"].values()
+            list(room_state_bytype_ids["m.room.member"].values())
         )
         all_members = [
             ev for ev in member_events.values()

+ 4 - 2
synapse/push/push_rule_evaluator.py

@@ -21,6 +21,8 @@ from synapse.types import UserID
 from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
 from synapse.util.caches.lrucache import LruCache
 
+from six import string_types
+
 logger = logging.getLogger(__name__)
 
 
@@ -150,7 +152,7 @@ class PushRuleEvaluatorForEvent(object):
 
 # Caches (glob, word_boundary) -> regex for push. See _glob_matches
 regex_cache = LruCache(50000 * CACHE_SIZE_FACTOR)
-register_cache("regex_push_cache", regex_cache)
+register_cache("cache", "regex_push_cache", regex_cache)
 
 
 def _glob_matches(glob, value, word_boundary=False):
@@ -238,7 +240,7 @@ def _flatten_dict(d, prefix=[], result=None):
     if result is None:
         result = {}
     for key, value in d.items():
-        if isinstance(value, basestring):
+        if isinstance(value, string_types):
             result[".".join(prefix + [key])] = value.lower()
         elif hasattr(value, "items"):
             _flatten_dict(value, prefix=(prefix + [key]), result=result)

+ 1 - 0
synapse/python_dependencies.py

@@ -56,6 +56,7 @@ REQUIREMENTS = {
     "msgpack-python>=0.3.0": ["msgpack"],
     "phonenumbers>=8.2.0": ["phonenumbers"],
     "six": ["six"],
+    "prometheus_client": ["prometheus_client"],
 }
 CONDITIONAL_REQUIREMENTS = {
     "web_client": {

+ 42 - 59
synapse/replication/tcp/protocol.py

@@ -60,21 +60,21 @@ from .commands import (
 )
 from .streams import STREAMS_MAP
 
+from synapse.metrics import LaterGauge
 from synapse.util.stringutils import random_string
-from synapse.metrics.metric import CounterMetric
 
-import logging
-import synapse.metrics
-import struct
-import fcntl
+from prometheus_client import Counter
 
+from collections import defaultdict
 
-metrics = synapse.metrics.get_metrics_for(__name__)
+from six import iterkeys, iteritems
 
-connection_close_counter = metrics.register_counter(
-    "close_reason", labels=["reason_type"],
-)
+import logging
+import struct
+import fcntl
 
+connection_close_counter = Counter(
+    "synapse_replication_tcp_protocol_close_reason", "", ["reason_type"])
 
 # A list of all connected protocols. This allows us to send metrics about the
 # connections.
@@ -136,12 +136,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
         # The LoopingCall for sending pings.
         self._send_ping_loop = None
 
-        self.inbound_commands_counter = CounterMetric(
-            "inbound_commands", labels=["command"],
-        )
-        self.outbound_commands_counter = CounterMetric(
-            "outbound_commands", labels=["command"],
-        )
+        self.inbound_commands_counter = defaultdict(int)
+        self.outbound_commands_counter = defaultdict(int)
 
     def connectionMade(self):
         logger.info("[%s] Connection established", self.id())
@@ -201,7 +197,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
 
         self.last_received_command = self.clock.time_msec()
 
-        self.inbound_commands_counter.inc(cmd_name)
+        self.inbound_commands_counter[cmd_name] = (
+            self.inbound_commands_counter[cmd_name] + 1)
 
         cmd_cls = COMMAND_MAP[cmd_name]
         try:
@@ -251,8 +248,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
             self._queue_command(cmd)
             return
 
-        self.outbound_commands_counter.inc(cmd.NAME)
-
+        self.outbound_commands_counter[cmd.NAME] = (
+            self.outbound_commands_counter[cmd.NAME] + 1)
         string = "%s %s" % (cmd.NAME, cmd.to_line(),)
         if "\n" in string:
             raise Exception("Unexpected newline in command: %r", string)
@@ -317,9 +314,9 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
     def connectionLost(self, reason):
         logger.info("[%s] Replication connection closed: %r", self.id(), reason)
         if isinstance(reason, Failure):
-            connection_close_counter.inc(reason.type.__name__)
+            connection_close_counter.labels(reason.type.__name__).inc()
         else:
-            connection_close_counter.inc(reason.__class__.__name__)
+            connection_close_counter.labels(reason.__class__.__name__).inc()
 
         try:
             # Remove us from list of connections to be monitored
@@ -392,7 +389,7 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):
 
         if stream_name == "ALL":
             # Subscribe to all streams we're publishing to.
-            for stream in self.streamer.streams_by_name.iterkeys():
+            for stream in iterkeys(self.streamer.streams_by_name):
                 self.subscribe_to_stream(stream, token)
         else:
             self.subscribe_to_stream(stream_name, token)
@@ -498,7 +495,7 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
         BaseReplicationStreamProtocol.connectionMade(self)
 
         # Once we've connected subscribe to the necessary streams
-        for stream_name, token in self.handler.get_streams_to_replicate().iteritems():
+        for stream_name, token in iteritems(self.handler.get_streams_to_replicate()):
             self.replicate(stream_name, token)
 
         # Tell the server if we have any users currently syncing (should only
@@ -518,7 +515,7 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
 
     def on_RDATA(self, cmd):
         stream_name = cmd.stream_name
-        inbound_rdata_count.inc(stream_name)
+        inbound_rdata_count.labels(stream_name).inc()
 
         try:
             row = STREAMS_MAP[stream_name].ROW_TYPE(*cmd.row)
@@ -566,14 +563,12 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
 
 # The following simply registers metrics for the replication connections
 
-metrics.register_callback(
-    "pending_commands",
+pending_commands = LaterGauge(
+    "pending_commands", "", ["name", "conn_id"],
     lambda: {
         (p.name, p.conn_id): len(p.pending_commands)
         for p in connected_connections
-    },
-    labels=["name", "conn_id"],
-)
+    })
 
 
 def transport_buffer_size(protocol):
@@ -583,14 +578,12 @@ def transport_buffer_size(protocol):
     return 0
 
 
-metrics.register_callback(
-    "transport_send_buffer",
+transport_send_buffer = LaterGauge(
+    "synapse_replication_tcp_transport_send_buffer", "", ["name", "conn_id"],
     lambda: {
         (p.name, p.conn_id): transport_buffer_size(p)
         for p in connected_connections
-    },
-    labels=["name", "conn_id"],
-)
+    })
 
 
 def transport_kernel_read_buffer_size(protocol, read=True):
@@ -608,48 +601,38 @@ def transport_kernel_read_buffer_size(protocol, read=True):
     return 0
 
 
-metrics.register_callback(
-    "transport_kernel_send_buffer",
+tcp_transport_kernel_send_buffer = LaterGauge(
+    "synapse_replication_tcp_transport_kernel_send_buffer", "", ["name", "conn_id"],
     lambda: {
         (p.name, p.conn_id): transport_kernel_read_buffer_size(p, False)
         for p in connected_connections
-    },
-    labels=["name", "conn_id"],
-)
+    })
 
 
-metrics.register_callback(
-    "transport_kernel_read_buffer",
+tcp_transport_kernel_read_buffer = LaterGauge(
+    "synapse_replication_tcp_transport_kernel_read_buffer", "", ["name", "conn_id"],
     lambda: {
         (p.name, p.conn_id): transport_kernel_read_buffer_size(p, True)
         for p in connected_connections
-    },
-    labels=["name", "conn_id"],
-)
+    })
 
 
-metrics.register_callback(
-    "inbound_commands",
+tcp_inbound_commands = LaterGauge(
+    "synapse_replication_tcp_inbound_commands", "", ["command", "name", "conn_id"],
     lambda: {
         (k[0], p.name, p.conn_id): count
         for p in connected_connections
-        for k, count in p.inbound_commands_counter.counts.iteritems()
-    },
-    labels=["command", "name", "conn_id"],
-)
+        for k, count in iteritems(p.inbound_commands_counter)
+    })
 
-metrics.register_callback(
-    "outbound_commands",
+tcp_outbound_commands = LaterGauge(
+    "synapse_replication_tcp_outbound_commands", "", ["command", "name", "conn_id"],
     lambda: {
         (k[0], p.name, p.conn_id): count
         for p in connected_connections
-        for k, count in p.outbound_commands_counter.counts.iteritems()
-    },
-    labels=["command", "name", "conn_id"],
-)
+        for k, count in iteritems(p.outbound_commands_counter)
+    })
 
 # number of updates received for each RDATA stream
-inbound_rdata_count = metrics.register_counter(
-    "inbound_rdata_count",
-    labels=["stream_name"],
-)
+inbound_rdata_count = Counter("synapse_replication_tcp_inbound_rdata_count", "",
+                              ["stream_name"])
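`LaterGauge` (imported here from `synapse.metrics`) is the replacement for `metrics.register_callback`: a gauge whose value — or per-label-tuple dict of values — is produced by a callback at scrape time rather than being pushed. A rough equivalent using `prometheus_client`'s custom-collector API (a sketch of the idea; the class name and shape here are illustrative, not Synapse's implementation):

```python
from prometheus_client import CollectorRegistry
from prometheus_client.core import GaugeMetricFamily

class CallbackGauge(object):
    """Evaluate `caller` at collection time; it returns {label_tuple: value}."""
    def __init__(self, name, desc, labels, caller, registry):
        self.name, self.desc, self.labels, self.caller = name, desc, labels, caller
        registry.register(self)

    def collect(self):
        g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)
        for key, value in self.caller().items():
            g.add_metric(list(key), value)
        yield g

registry = CollectorRegistry()
pending = {("client", "conn1"): 3, ("client", "conn2"): 0}
CallbackGauge("demo_pending_commands", "pending commands per connection",
              ["name", "conn_id"], lambda: pending, registry)

print(registry.get_sample_value(
    "demo_pending_commands", {"name": "client", "conn_id": "conn1"}))  # 3.0
```

Because the callback runs on every collection, the reported values always reflect the live `connected_connections` list without any bookkeeping in the hot path.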

+ 19 - 18
synapse/replication/tcp/resource.py

@@ -22,20 +22,21 @@ from .streams import STREAMS_MAP, FederationStream
 from .protocol import ServerReplicationStreamProtocol
 
 from synapse.util.metrics import Measure, measure_func
+from synapse.metrics import LaterGauge
 
 import logging
-import synapse.metrics
 
+from prometheus_client import Counter
+from six import itervalues
 
-metrics = synapse.metrics.get_metrics_for(__name__)
-stream_updates_counter = metrics.register_counter(
-    "stream_updates", labels=["stream_name"]
-)
-user_sync_counter = metrics.register_counter("user_sync")
-federation_ack_counter = metrics.register_counter("federation_ack")
-remove_pusher_counter = metrics.register_counter("remove_pusher")
-invalidate_cache_counter = metrics.register_counter("invalidate_cache")
-user_ip_cache_counter = metrics.register_counter("user_ip_cache")
+stream_updates_counter = Counter("synapse_replication_tcp_resource_stream_updates",
+                                 "", ["stream_name"])
+user_sync_counter = Counter("synapse_replication_tcp_resource_user_sync", "")
+federation_ack_counter = Counter("synapse_replication_tcp_resource_federation_ack", "")
+remove_pusher_counter = Counter("synapse_replication_tcp_resource_remove_pusher", "")
+invalidate_cache_counter = Counter("synapse_replication_tcp_resource_invalidate_cache",
+                                   "")
+user_ip_cache_counter = Counter("synapse_replication_tcp_resource_user_ip_cache", "")
 
 logger = logging.getLogger(__name__)
 
@@ -74,29 +75,29 @@ class ReplicationStreamer(object):
         # Current connections.
         self.connections = []
 
-        metrics.register_callback("total_connections", lambda: len(self.connections))
+        LaterGauge("synapse_replication_tcp_resource_total_connections", "", [],
+                   lambda: len(self.connections))
 
         # List of streams that clients can subscribe to.
         # We only support federation stream if federation sending hase been
         # disabled on the master.
         self.streams = [
-            stream(hs) for stream in STREAMS_MAP.itervalues()
+            stream(hs) for stream in itervalues(STREAMS_MAP)
             if stream != FederationStream or not hs.config.send_federation
         ]
 
         self.streams_by_name = {stream.NAME: stream for stream in self.streams}
 
-        metrics.register_callback(
-            "connections_per_stream",
+        LaterGauge(
+            "synapse_replication_tcp_resource_connections_per_stream", "",
+            ["stream_name"],
             lambda: {
                 (stream_name,): len([
                     conn for conn in self.connections
                     if stream_name in conn.replication_streams
                 ])
                 for stream_name in self.streams_by_name
-            },
-            labels=["stream_name"],
-        )
+            })
 
         self.federation_sender = None
         if not hs.config.send_federation:
@@ -176,7 +177,7 @@ class ReplicationStreamer(object):
                             logger.info(
                                 "Streaming: %s -> %s", stream.NAME, updates[-1][0]
                             )
-                            stream_updates_counter.inc_by(len(updates), stream.NAME)
+                            stream_updates_counter.labels(stream.NAME).inc(len(updates))
 
                         # Some streams return multiple rows with the same stream IDs,
                         # we need to make sure they get sent out in batches. We do

+ 1 - 1
synapse/rest/client/transactions.py

@@ -104,7 +104,7 @@ class HttpTransactionCache(object):
 
     def _cleanup(self):
         now = self.clock.time_msec()
-        for key in self.transactions.keys():
+        for key in list(self.transactions):
             ts = self.transactions[key][1]
             if now > (ts + CLEANUP_PERIOD_MS):  # after cleanup period
                 del self.transactions[key]
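The `transactions.py` change matters because on Python 3, deleting from a dict while iterating over its live `keys()` view raises `RuntimeError: dictionary changed size during iteration`; iterating a snapshot via `list(d)` is safe. A standalone sketch of the same cleanup loop (constants and sample data are illustrative):

```python
CLEANUP_PERIOD_MS = 1000
transactions = {"txn1": ("resp", 0), "txn2": ("resp", 5000)}
now = 2000

# list(transactions) snapshots the keys up front, so deleting entries
# inside the loop cannot invalidate the iterator.
for key in list(transactions):
    ts = transactions[key][1]
    if now > (ts + CLEANUP_PERIOD_MS):  # past the cleanup period
        del transactions[key]

print(sorted(transactions))  # ['txn2']
```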

+ 5 - 3
synapse/rest/client/v1/presence.py

@@ -23,6 +23,8 @@ from synapse.handlers.presence import format_user_presence_state
 from synapse.http.servlet import parse_json_object_from_request
 from .base import ClientV1RestServlet, client_path_patterns
 
+from six import string_types
+
 import logging
 
 logger = logging.getLogger(__name__)
@@ -71,7 +73,7 @@ class PresenceStatusRestServlet(ClientV1RestServlet):
 
             if "status_msg" in content:
                 state["status_msg"] = content.pop("status_msg")
-                if not isinstance(state["status_msg"], basestring):
+                if not isinstance(state["status_msg"], string_types):
                     raise SynapseError(400, "status_msg must be a string.")
 
             if content:
@@ -129,7 +131,7 @@ class PresenceListRestServlet(ClientV1RestServlet):
 
         if "invite" in content:
             for u in content["invite"]:
-                if not isinstance(u, basestring):
+                if not isinstance(u, string_types):
                     raise SynapseError(400, "Bad invite value.")
                 if len(u) == 0:
                     continue
@@ -140,7 +142,7 @@
 
         if "drop" in content:
             for u in content["drop"]:
-                if not isinstance(u, basestring):
+                if not isinstance(u, string_types):
                     raise SynapseError(400, "Bad drop value.")
                 if len(u) == 0:
                     continue
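The `basestring` → `string_types` rewrites above are needed because `basestring` no longer exists on Python 3; `six.string_types` is `(str,)` there and `(basestring,)` on Python 2, so the `isinstance` checks work on both. Sketched without six (the validator function is illustrative, not Synapse's):

```python
import sys

# What six.string_types provides: (basestring,) on Python 2, (str,) on 3.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821

def validate_user_ids(user_ids):
    # Mirrors the servlet's per-entry type check on "invite"/"drop" lists.
    for u in user_ids:
        if not isinstance(u, string_types):
            raise ValueError("Bad invite value: %r" % (u,))

validate_user_ids(["@alice:example.com"])  # fine
try:
    validate_user_ids([42])
except ValueError as e:
    print(e)  # Bad invite value: 42
```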

+ 2 - 1
synapse/rest/media/v1/media_repository.py

@@ -48,6 +48,7 @@ import shutil
 import cgi
 import logging
 from six.moves.urllib import parse as urlparse
+from six import iteritems
 
 logger = logging.getLogger(__name__)
 
@@ -603,7 +604,7 @@ class MediaRepository(object):
                 thumbnails[(t_width, t_height, r_type)] = r_method
 
         # Now we generate the thumbnails for each dimension, store it
-        for (t_width, t_height, t_type), t_method in thumbnails.iteritems():
+        for (t_width, t_height, t_type), t_method in iteritems(thumbnails):
             # Generate the thumbnail
             if t_method == "crop":
                 t_byte_source = yield make_deferred_yieldable(threads.deferToThread(

+ 5 - 3
synapse/rest/media/v1/preview_url_resource.py

@@ -24,7 +24,9 @@ import shutil
 import sys
 import traceback
 import simplejson as json
-import urlparse
+
+from six.moves import urllib_parse as urlparse
+from six import string_types
 
 from twisted.web.server import NOT_DONE_YET
 from twisted.internet import defer
@@ -590,8 +592,8 @@ def _iterate_over_text(tree, *tags_to_ignore):
     # to be returned.
     elements = iter([tree])
     while True:
-        el = elements.next()
-        if isinstance(el, basestring):
+        el = next(elements)
+        if isinstance(el, string_types):
             yield el
         elif el is not None and el.tag not in tags_to_ignore:
             # el.text is the text before the first child, so we can immediately

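The `preview_url_resource` hunk fixes two more py2-isms in one place: the iterator `.next()` method and the `basestring` type. A hedged sketch of the portable equivalents (the `string_types` tuple here stands in for `six.string_types`):

```python
import sys

# six.string_types is (str,) on Python 3 and (basestring,) on Python 2.
string_types = (str,) if sys.version_info[0] >= 3 else (basestring,)  # noqa: F821

elements = iter(["some text", None])
el = next(elements)               # replaces the removed elements.next()
assert isinstance(el, string_types)
assert not isinstance(next(elements), string_types)  # None is not a string
```

The built-in `next()` has existed since Python 2.6, so this spelling works unchanged on both interpreters.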
+ 5 - 0
synapse/server_notices/consent_server_notices.py

@@ -42,6 +42,7 @@ class ConsentServerNotices(object):

         self._current_consent_version = hs.config.user_consent_version
         self._server_notice_content = hs.config.user_consent_server_notice_content
+        self._send_to_guests = hs.config.user_consent_server_notice_to_guests

         if self._server_notice_content is not None:
             if not self._server_notices_manager.is_enabled():
@@ -78,6 +79,10 @@ class ConsentServerNotices(object):
         try:
             u = yield self._store.get_user_by_id(user_id)

+            if u["is_guest"] and not self._send_to_guests:
+                # don't send to guests
+                return
+
             if u["consent_version"] == self._current_consent_version:
                 # user has already consented
                 return

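The consent-notice change (PR #3288) adds a config flag and an early return so guests are skipped before the consent-version check. A standalone sketch of that ordering; the function name is hypothetical and the dict shape mirrors the `get_user_by_id` row used above:

```python
def should_send_consent_notice(user, current_version, send_to_guests=False):
    # Guard added by this change: skip guests unless configured otherwise.
    if user["is_guest"] and not send_to_guests:
        return False
    # Pre-existing check: user has already consented to this version.
    if user["consent_version"] == current_version:
        return False
    return True

assert not should_send_consent_notice(
    {"is_guest": True, "consent_version": None}, "1.0")
assert should_send_consent_notice(
    {"is_guest": True, "consent_version": None}, "1.0", send_to_guests=True)
assert should_send_consent_notice(
    {"is_guest": False, "consent_version": None}, "1.0")
```

Checking the guest flag first matters: guests cannot normally give consent, so evaluating their consent version would be meaningless.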
+ 27 - 24
synapse/state.py

@@ -32,6 +32,8 @@ from frozendict import frozendict
 import logging
 import hashlib

+from six import iteritems, itervalues
+
 logger = logging.getLogger(__name__)


@@ -130,9 +132,10 @@ class StateHandler(object):
             defer.returnValue(event)
             return

-        state_map = yield self.store.get_events(state.values(), get_prev_content=False)
+        state_map = yield self.store.get_events(list(state.values()),
+                                                get_prev_content=False)
         state = {
-            key: state_map[e_id] for key, e_id in state.iteritems() if e_id in state_map
+            key: state_map[e_id] for key, e_id in iteritems(state) if e_id in state_map
         }

         defer.returnValue(state)
@@ -338,7 +341,7 @@ class StateHandler(object):
         )

         if len(state_groups_ids) == 1:
-            name, state_list = state_groups_ids.items().pop()
+            name, state_list = list(state_groups_ids.items()).pop()

             prev_group, delta_ids = yield self.store.get_state_group_delta(name)

@@ -378,7 +381,7 @@ class StateHandler(object):
             new_state = resolve_events_with_state_map(state_set_ids, state_map)

         new_state = {
-            key: state_map[ev_id] for key, ev_id in new_state.iteritems()
+            key: state_map[ev_id] for key, ev_id in iteritems(new_state)
         }

         return new_state
@@ -458,15 +461,15 @@ class StateResolutionHandler(object):
             # build a map from state key to the event_ids which set that state.
             # dict[(str, str), set[str])
             state = {}
-            for st in state_groups_ids.itervalues():
-                for key, e_id in st.iteritems():
+            for st in itervalues(state_groups_ids):
+                for key, e_id in iteritems(st):
                     state.setdefault(key, set()).add(e_id)

             # build a map from state key to the event_ids which set that state,
             # including only those where there are state keys in conflict.
             conflicted_state = {
                 k: list(v)
-                for k, v in state.iteritems()
+                for k, v in iteritems(state)
                 if len(v) > 1
             }

@@ -474,13 +477,13 @@ class StateResolutionHandler(object):
                 logger.info("Resolving conflicted state for %r", room_id)
                 with Measure(self.clock, "state._resolve_events"):
                     new_state = yield resolve_events_with_factory(
-                        state_groups_ids.values(),
+                        list(state_groups_ids.values()),
                         event_map=event_map,
                         state_map_factory=state_map_factory,
                     )
             else:
                 new_state = {
-                    key: e_ids.pop() for key, e_ids in state.iteritems()
+                    key: e_ids.pop() for key, e_ids in iteritems(state)
                 }

             with Measure(self.clock, "state.create_group_ids"):
@@ -489,8 +492,8 @@ class StateResolutionHandler(object):
                 # which will be used as a cache key for future resolutions, but
                 # not get persisted.
                 state_group = None
-                new_state_event_ids = frozenset(new_state.itervalues())
-                for sg, events in state_groups_ids.iteritems():
+                new_state_event_ids = frozenset(itervalues(new_state))
+                for sg, events in iteritems(state_groups_ids):
                     if new_state_event_ids == frozenset(e_id for e_id in events):
                         state_group = sg
                         break
@@ -501,11 +504,11 @@ class StateResolutionHandler(object):

                 prev_group = None
                 delta_ids = None
-                for old_group, old_ids in state_groups_ids.iteritems():
+                for old_group, old_ids in iteritems(state_groups_ids):
                     if not set(new_state) - set(old_ids):
                         n_delta_ids = {
                             k: v
-                            for k, v in new_state.iteritems()
+                            for k, v in iteritems(new_state)
                             if old_ids.get(k) != v
                         }
                         if not delta_ids or len(n_delta_ids) < len(delta_ids):
@@ -527,7 +530,7 @@ class StateResolutionHandler(object):

 def _ordered_events(events):
     def key_func(e):
-        return -int(e.depth), hashlib.sha1(e.event_id).hexdigest()
+        return -int(e.depth), hashlib.sha1(e.event_id.encode()).hexdigest()

     return sorted(events, key=key_func)

@@ -584,7 +587,7 @@ def _seperate(state_sets):
     conflicted_state = {}

     for state_set in state_sets[1:]:
-        for key, value in state_set.iteritems():
+        for key, value in iteritems(state_set):
             # Check if there is an unconflicted entry for the state key.
             unconflicted_value = unconflicted_state.get(key)
             if unconflicted_value is None:
@@ -640,7 +643,7 @@ def resolve_events_with_factory(state_sets, event_map, state_map_factory):

     needed_events = set(
         event_id
-        for event_ids in conflicted_state.itervalues()
+        for event_ids in itervalues(conflicted_state)
         for event_id in event_ids
     )
     if event_map is not None:
@@ -662,7 +665,7 @@ def resolve_events_with_factory(state_sets, event_map, state_map_factory):
         unconflicted_state, conflicted_state, state_map
     )

-    new_needed_events = set(auth_events.itervalues())
+    new_needed_events = set(itervalues(auth_events))
     new_needed_events -= needed_events
     if event_map is not None:
         new_needed_events -= set(event_map.iterkeys())
@@ -679,7 +682,7 @@ def resolve_events_with_factory(state_sets, event_map, state_map_factory):

 def _create_auth_events_from_maps(unconflicted_state, conflicted_state, state_map):
     auth_events = {}
-    for event_ids in conflicted_state.itervalues():
+    for event_ids in itervalues(conflicted_state):
         for event_id in event_ids:
             if event_id in state_map:
                 keys = event_auth.auth_types_for_event(state_map[event_id])
@@ -694,7 +697,7 @@ def _create_auth_events_from_maps(unconflicted_state, conflicted_state, state_ma
 def _resolve_with_state(unconflicted_state_ids, conflicted_state_ds, auth_event_ids,
                         state_map):
     conflicted_state = {}
-    for key, event_ids in conflicted_state_ds.iteritems():
+    for key, event_ids in iteritems(conflicted_state_ds):
         events = [state_map[ev_id] for ev_id in event_ids if ev_id in state_map]
         if len(events) > 1:
             conflicted_state[key] = events
@@ -703,7 +706,7 @@ def _resolve_with_state(unconflicted_state_ids, conflicted_state_ds, auth_event_

     auth_events = {
         key: state_map[ev_id]
-        for key, ev_id in auth_event_ids.iteritems()
+        for key, ev_id in iteritems(auth_event_ids)
         if ev_id in state_map
     }

@@ -716,7 +719,7 @@ def _resolve_with_state(unconflicted_state_ids, conflicted_state_ds, auth_event_
         raise

     new_state = unconflicted_state_ids
-    for key, event in resolved_state.iteritems():
+    for key, event in iteritems(resolved_state):
         new_state[key] = event.event_id

     return new_state
@@ -741,7 +744,7 @@ def _resolve_state_events(conflicted_state, auth_events):

     auth_events.update(resolved_state)

-    for key, events in conflicted_state.iteritems():
+    for key, events in iteritems(conflicted_state):
         if key[0] == EventTypes.JoinRules:
             logger.debug("Resolving conflicted join rules %r", events)
             resolved_state[key] = _resolve_auth_events(
@@ -751,7 +754,7 @@ def _resolve_state_events(conflicted_state, auth_events):

     auth_events.update(resolved_state)

-    for key, events in conflicted_state.iteritems():
+    for key, events in iteritems(conflicted_state):
         if key[0] == EventTypes.Member:
             logger.debug("Resolving conflicted member lists %r", events)
             resolved_state[key] = _resolve_auth_events(
@@ -761,7 +764,7 @@ def _resolve_state_events(conflicted_state, auth_events):

     auth_events.update(resolved_state)

-    for key, events in conflicted_state.iteritems():
+    for key, events in iteritems(conflicted_state):
         if key not in resolved_state:
             logger.debug("Resolving conflicted state %r:%r", key, events)
             resolved_state[key] = _resolve_normal_events(

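The one non-iterator change in `state.py` is the `_ordered_events` sort key: on Python 3, `hashlib.sha1` requires bytes, so the str event ID must be encoded first (on Python 2 the `encode()` is a harmless no-op for ASCII IDs). A minimal sketch with a made-up event ID:

```python
import hashlib

event_id = "$15300000000abcde:example.com"
# hashlib.sha1(event_id) raises TypeError on Python 3; encode() first.
digest = hashlib.sha1(event_id.encode()).hexdigest()
assert len(digest) == 40         # SHA-1 hexdigest is always 40 hex chars
assert int(digest, 16) >= 0      # and valid hexadecimal
```

The full sort key is `(-depth, sha1_hexdigest)`: deepest events first, with the hash acting as a deterministic tie-breaker.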
+ 46 - 38
synapse/storage/_base.py

@@ -18,8 +18,8 @@ from synapse.api.errors import StoreError
 from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
 from synapse.util.caches.descriptors import Cache
 from synapse.storage.engines import PostgresEngine
-import synapse.metrics

+from prometheus_client import Histogram

 from twisted.internet import defer

@@ -27,20 +27,25 @@ import sys
 import time
 import threading

+from six import itervalues, iterkeys, iteritems
+from six.moves import intern, range

 logger = logging.getLogger(__name__)

+try:
+    MAX_TXN_ID = sys.maxint - 1
+except AttributeError:
+    # python 3 does not have a maximum int value
+    MAX_TXN_ID = 2**63 - 1
+
 sql_logger = logging.getLogger("synapse.storage.SQL")
 transaction_logger = logging.getLogger("synapse.storage.txn")
 perf_logger = logging.getLogger("synapse.storage.TIME")

+sql_scheduling_timer = Histogram("synapse_storage_schedule_time", "sec")

-metrics = synapse.metrics.get_metrics_for("synapse.storage")
-
-sql_scheduling_timer = metrics.register_distribution("schedule_time")
-
-sql_query_timer = metrics.register_distribution("query_time", labels=["verb"])
-sql_txn_timer = metrics.register_distribution("transaction_time", labels=["desc"])
+sql_query_timer = Histogram("synapse_storage_query_time", "sec", ["verb"])
+sql_txn_timer = Histogram("synapse_storage_transaction_time", "sec", ["desc"])


 class LoggingTransaction(object):
@@ -105,7 +110,7 @@ class LoggingTransaction(object):
                 # Don't let logging failures stop SQL from working
                 pass

-        start = time.time() * 1000
+        start = time.time()

         try:
             return func(
@@ -115,9 +120,9 @@ class LoggingTransaction(object):
             logger.debug("[SQL FAIL] {%s} %s", self.name, e)
             raise
         finally:
-            msecs = (time.time() * 1000) - start
-            sql_logger.debug("[SQL time] {%s} %f", self.name, msecs)
-            sql_query_timer.inc_by(msecs, sql.split()[0])
+            secs = time.time() - start
+            sql_logger.debug("[SQL time] {%s} %f sec", self.name, secs)
+            sql_query_timer.labels(sql.split()[0]).observe(secs)


 class PerformanceCounters(object):
@@ -127,7 +132,7 @@ class PerformanceCounters(object):

     def update(self, key, start_time, end_time=None):
         if end_time is None:
-            end_time = time.time() * 1000
+            end_time = time.time()
         duration = end_time - start_time
         count, cum_time = self.current_counters.get(key, (0, 0))
         count += 1
@@ -137,7 +142,7 @@ class PerformanceCounters(object):

     def interval(self, interval_duration, limit=3):
         counters = []
-        for name, (count, cum_time) in self.current_counters.iteritems():
+        for name, (count, cum_time) in iteritems(self.current_counters):
             prev_count, prev_time = self.previous_counters.get(name, (0, 0))
             counters.append((
                 (cum_time - prev_time) / interval_duration,
@@ -217,12 +222,12 @@ class SQLBaseStore(object):

     def _new_transaction(self, conn, desc, after_callbacks, exception_callbacks,
                          logging_context, func, *args, **kwargs):
-        start = time.time() * 1000
+        start = time.time()
         txn_id = self._TXN_ID

         # We don't really need these to be unique, so lets stop it from
         # growing really large.
-        self._TXN_ID = (self._TXN_ID + 1) % (sys.maxint - 1)
+        self._TXN_ID = (self._TXN_ID + 1) % (MAX_TXN_ID)

         name = "%s-%x" % (desc, txn_id, )

@@ -277,17 +282,17 @@ class SQLBaseStore(object):
             logger.debug("[TXN FAIL] {%s} %s", name, e)
             raise
         finally:
-            end = time.time() * 1000
+            end = time.time()
             duration = end - start

             if logging_context is not None:
                 logging_context.add_database_transaction(duration)

-            transaction_logger.debug("[TXN END] {%s} %f", name, duration)
+            transaction_logger.debug("[TXN END] {%s} %f sec", name, duration)

             self._current_txn_total_time += duration
             self._txn_perf_counters.update(desc, start, end)
-            sql_txn_timer.inc_by(duration, desc)
+            sql_txn_timer.labels(desc).observe(duration)

     @defer.inlineCallbacks
     def runInteraction(self, desc, func, *args, **kwargs):
@@ -344,13 +349,13 @@ class SQLBaseStore(object):
         """
         current_context = LoggingContext.current_context()

-        start_time = time.time() * 1000
+        start_time = time.time()

         def inner_func(conn, *args, **kwargs):
             with LoggingContext("runWithConnection") as context:
-                sched_duration_ms = time.time() * 1000 - start_time
-                sql_scheduling_timer.inc_by(sched_duration_ms)
-                current_context.add_database_scheduled(sched_duration_ms)
+                sched_duration_sec = time.time() - start_time
+                sql_scheduling_timer.observe(sched_duration_sec)
+                current_context.add_database_scheduled(sched_duration_sec)

                 if self.database_engine.is_connection_closed(conn):
                     logger.debug("Reconnecting closed database connection")
@@ -543,7 +548,7 @@ class SQLBaseStore(object):
             ", ".join("%s = ?" % (k,) for k in values),
             " AND ".join("%s = ?" % (k,) for k in keyvalues)
         )
-        sqlargs = values.values() + keyvalues.values()
+        sqlargs = list(values.values()) + list(keyvalues.values())

         txn.execute(sql, sqlargs)
         if txn.rowcount > 0:
@@ -561,7 +566,7 @@ class SQLBaseStore(object):
             ", ".join(k for k in allvalues),
             ", ".join("?" for _ in allvalues)
         )
-        txn.execute(sql, allvalues.values())
+        txn.execute(sql, list(allvalues.values()))
         # successfully inserted
         return True

@@ -629,8 +634,8 @@ class SQLBaseStore(object):
         }

         if keyvalues:
-            sql += " WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys())
-            txn.execute(sql, keyvalues.values())
+            sql += " WHERE %s" % " AND ".join("%s = ?" % k for k in iterkeys(keyvalues))
+            txn.execute(sql, list(keyvalues.values()))
         else:
             txn.execute(sql)

@@ -694,7 +699,7 @@ class SQLBaseStore(object):
                 table,
                 " AND ".join("%s = ?" % (k, ) for k in keyvalues)
             )
-            txn.execute(sql, keyvalues.values())
+            txn.execute(sql, list(keyvalues.values()))
         else:
             sql = "SELECT %s FROM %s" % (
                 ", ".join(retcols),
@@ -725,9 +730,12 @@ class SQLBaseStore(object):
         if not iterable:
             defer.returnValue(results)

+        # iterables can not be sliced, so convert it to a list first
+        it_list = list(iterable)
+
         chunks = [
-            iterable[i:i + batch_size]
-            for i in xrange(0, len(iterable), batch_size)
+            it_list[i:i + batch_size]
+            for i in range(0, len(it_list), batch_size)
         ]
         for chunk in chunks:
             rows = yield self.runInteraction(
@@ -767,7 +775,7 @@ class SQLBaseStore(object):
         )
         values.extend(iterable)

-        for key, value in keyvalues.iteritems():
+        for key, value in iteritems(keyvalues):
             clauses.append("%s = ?" % (key,))
             values.append(value)

@@ -790,7 +798,7 @@ class SQLBaseStore(object):
     @staticmethod
     def _simple_update_txn(txn, table, keyvalues, updatevalues):
         if keyvalues:
-            where = "WHERE %s" % " AND ".join("%s = ?" % k for k in keyvalues.iterkeys())
+            where = "WHERE %s" % " AND ".join("%s = ?" % k for k in iterkeys(keyvalues))
         else:
             where = ""

@@ -802,7 +810,7 @@ class SQLBaseStore(object):

         txn.execute(
             update_sql,
-            updatevalues.values() + keyvalues.values()
+            list(updatevalues.values()) + list(keyvalues.values())
         )

         return txn.rowcount
@@ -850,7 +858,7 @@ class SQLBaseStore(object):
             " AND ".join("%s = ?" % (k,) for k in keyvalues)
         )

-        txn.execute(select_sql, keyvalues.values())
+        txn.execute(select_sql, list(keyvalues.values()))

         row = txn.fetchone()
         if not row:
@@ -888,7 +896,7 @@ class SQLBaseStore(object):
             " AND ".join("%s = ?" % (k, ) for k in keyvalues)
         )

-        txn.execute(sql, keyvalues.values())
+        txn.execute(sql, list(keyvalues.values()))
         if txn.rowcount == 0:
             raise StoreError(404, "No row found")
         if txn.rowcount > 1:
@@ -906,7 +914,7 @@ class SQLBaseStore(object):
             " AND ".join("%s = ?" % (k, ) for k in keyvalues)
         )

-        return txn.execute(sql, keyvalues.values())
+        return txn.execute(sql, list(keyvalues.values()))

     def _simple_delete_many(self, table, column, iterable, keyvalues, desc):
         return self.runInteraction(
@@ -938,7 +946,7 @@ class SQLBaseStore(object):
         )
         values.extend(iterable)

-        for key, value in keyvalues.iteritems():
+        for key, value in iteritems(keyvalues):
             clauses.append("%s = ?" % (key,))
             values.append(value)

@@ -978,7 +986,7 @@ class SQLBaseStore(object):
         txn.close()

         if cache:
-            min_val = min(cache.itervalues())
+            min_val = min(itervalues(cache))
         else:
             min_val = max_value

@@ -1093,7 +1101,7 @@ class SQLBaseStore(object):
                 " AND ".join("%s = ?" % (k,) for k in keyvalues),
                 " ? ASC LIMIT ? OFFSET ?"
             )
-            txn.execute(sql, keyvalues.values() + pagevalues)
+            txn.execute(sql, list(keyvalues.values()) + list(pagevalues))
         else:
             sql = "SELECT %s FROM %s ORDER BY %s" % (
                 ", ".join(retcols),

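Beyond the Prometheus `Histogram` switch (note the timers now observe seconds, not milliseconds, which is part of the backwards-incompatible metrics change flagged at the top of this changelog), `storage/_base.py` fixes two interpreter differences worth isolating. A stdlib-only sketch; the dict contents are illustrative:

```python
import sys

# 1. sys.maxint is gone on Python 3 (ints are unbounded), so the
#    transaction-ID wrap-around needs an explicit ceiling.
try:
    MAX_TXN_ID = sys.maxint - 1  # Python 2
except AttributeError:
    MAX_TXN_ID = 2**63 - 1       # Python 3: pick a fixed bound

txn_id = (MAX_TXN_ID - 1 + 1) % MAX_TXN_ID
assert txn_id == 0  # wraps instead of growing without bound

# 2. dict.values() returns a view on Python 3, and views cannot be
#    concatenated with +, so SQL argument lists must be materialized.
values = {"consent_version": "1.0"}
keyvalues = {"name": "@user:example.com"}
sqlargs = list(values.values()) + list(keyvalues.values())
assert sqlargs == ["1.0", "@user:example.com"]
```

The same `list(...)` materialization also explains the `_simple_select_many_batch` hunk: slicing a generic iterable into chunks requires a concrete list first.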
+ 4 - 2
synapse/storage/client_ips.py

@@ -22,6 +22,8 @@ from . import background_updates

 from synapse.util.caches import CACHE_SIZE_FACTOR

+from six import iteritems
+

 logger = logging.getLogger(__name__)

@@ -99,7 +101,7 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
     def _update_client_ips_batch_txn(self, txn, to_update):
         self.database_engine.lock_table(txn, "user_ips")

-        for entry in to_update.iteritems():
+        for entry in iteritems(to_update):
             (user_id, access_token, ip), (user_agent, device_id, last_seen) = entry

             self._simple_upsert_txn(
@@ -231,5 +233,5 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
                 "user_agent": user_agent,
                 "last_seen": last_seen,
             }
-            for (access_token, ip), (user_agent, last_seen) in results.iteritems()
+            for (access_token, ip), (user_agent, last_seen) in iteritems(results)
         ))

+ 5 - 4
synapse/storage/devices.py

@@ -21,6 +21,7 @@ from synapse.api.errors import StoreError
 from ._base import SQLBaseStore, Cache
 from synapse.util.caches.descriptors import cached, cachedList, cachedInlineCallbacks

+from six import itervalues, iteritems

 logger = logging.getLogger(__name__)

@@ -360,7 +361,7 @@ class DeviceStore(SQLBaseStore):
             return (now_stream_id, [])

         if len(query_map) >= 20:
-            now_stream_id = max(stream_id for stream_id in query_map.itervalues())
+            now_stream_id = max(stream_id for stream_id in itervalues(query_map))

         devices = self._get_e2e_device_keys_txn(
             txn, query_map.keys(), include_all_devices=True
@@ -373,13 +374,13 @@ class DeviceStore(SQLBaseStore):
         """

         results = []
-        for user_id, user_devices in devices.iteritems():
+        for user_id, user_devices in iteritems(devices):
             # The prev_id for the first row is always the last row before
             # `from_stream_id`
             txn.execute(prev_sent_id_sql, (destination, user_id, from_stream_id))
             rows = txn.fetchall()
             prev_id = rows[0][0]
-            for device_id, device in user_devices.iteritems():
+            for device_id, device in iteritems(user_devices):
                 stream_id = query_map[(user_id, device_id)]
                 result = {
                     "user_id": user_id,
@@ -483,7 +484,7 @@ class DeviceStore(SQLBaseStore):
         if devices:
             user_devices = devices[user_id]
             results = []
-            for device_id, device in user_devices.iteritems():
+            for device_id, device in iteritems(user_devices):
                 result = {
                     "device_id": device_id,
                 }

+ 4 - 2
synapse/storage/end_to_end_keys.py

@@ -21,6 +21,8 @@ import simplejson as json
 
 from ._base import SQLBaseStore
 
+from six import iteritems
+
 
 class EndToEndKeyStore(SQLBaseStore):
     def set_e2e_device_keys(self, user_id, device_id, time_now, device_keys):
@@ -81,8 +83,8 @@ class EndToEndKeyStore(SQLBaseStore):
             query_list, include_all_devices,
         )
 
-        for user_id, device_keys in results.iteritems():
-            for device_id, device_info in device_keys.iteritems():
+        for user_id, device_keys in iteritems(results):
+            for device_id, device_info in iteritems(device_keys):
                 device_info["keys"] = json.loads(device_info.pop("key_json"))
 
         defer.returnValue(results)

+ 3 - 1
synapse/storage/event_push_actions.py

@@ -22,6 +22,8 @@ from synapse.util.caches.descriptors import cachedInlineCallbacks
 import logging
 import simplejson as json
 
+from six import iteritems
+
 logger = logging.getLogger(__name__)
 
 
@@ -420,7 +422,7 @@ class EventPushActionsWorkerStore(SQLBaseStore):
 
             txn.executemany(sql, (
                 _gen_entry(user_id, actions)
-                for user_id, actions in user_id_actions.iteritems()
+                for user_id, actions in iteritems(user_id_actions)
             ))
 
         return self.runInteraction(

+ 39 - 39
synapse/storage/events.py

@@ -40,30 +40,30 @@ import synapse.metrics
 from synapse.events import EventBase    # noqa: F401
 from synapse.events.snapshot import EventContext   # noqa: F401
 
-logger = logging.getLogger(__name__)
+from six.moves import range
+from six import itervalues, iteritems
 
+from prometheus_client import Counter
 
-metrics = synapse.metrics.get_metrics_for(__name__)
-persist_event_counter = metrics.register_counter("persisted_events")
-event_counter = metrics.register_counter(
-    "persisted_events_sep", labels=["type", "origin_type", "origin_entity"]
-)
+logger = logging.getLogger(__name__)
+
+persist_event_counter = Counter("synapse_storage_events_persisted_events", "")
+event_counter = Counter("synapse_storage_events_persisted_events_sep", "",
+                        ["type", "origin_type", "origin_entity"])
 
 # The number of times we are recalculating the current state
-state_delta_counter = metrics.register_counter(
-    "state_delta",
-)
+state_delta_counter = Counter("synapse_storage_events_state_delta", "")
+
 # The number of times we are recalculating state when there is only a
 # single forward extremity
-state_delta_single_event_counter = metrics.register_counter(
-    "state_delta_single_event",
-)
+state_delta_single_event_counter = Counter(
+    "synapse_storage_events_state_delta_single_event", "")
+
 # The number of times we are reculating state when we could have resonably
 # calculated the delta when we calculated the state for an event we were
 # persisting.
-state_delta_reuse_delta_counter = metrics.register_counter(
-    "state_delta_reuse_delta",
-)
+state_delta_reuse_delta_counter = Counter(
+    "synapse_storage_events_state_delta_reuse_delta", "")
 
 
 def encode_json(json_object):
@@ -248,7 +248,7 @@ class EventsStore(EventsWorkerStore):
             partitioned.setdefault(event.room_id, []).append((event, ctx))
 
         deferreds = []
-        for room_id, evs_ctxs in partitioned.iteritems():
+        for room_id, evs_ctxs in iteritems(partitioned):
             d = self._event_persist_queue.add_to_queue(
                 room_id, evs_ctxs,
                 backfilled=backfilled,
@@ -333,7 +333,7 @@ class EventsStore(EventsWorkerStore):
 
             chunks = [
                 events_and_contexts[x:x + 100]
-                for x in xrange(0, len(events_and_contexts), 100)
+                for x in range(0, len(events_and_contexts), 100)
             ]
 
             for chunk in chunks:
@@ -367,7 +367,7 @@ class EventsStore(EventsWorkerStore):
                                 (event, context)
                             )
 
-                        for room_id, ev_ctx_rm in events_by_room.iteritems():
+                        for room_id, ev_ctx_rm in iteritems(events_by_room):
                             # Work out new extremities by recursively adding and removing
                             # the new events.
                             latest_event_ids = yield self.get_latest_event_ids_in_room(
@@ -445,7 +445,7 @@ class EventsStore(EventsWorkerStore):
                     state_delta_for_room=state_delta_for_room,
                     new_forward_extremeties=new_forward_extremeties,
                 )
-                persist_event_counter.inc_by(len(chunk))
+                persist_event_counter.inc(len(chunk))
                 synapse.metrics.event_persisted_position.set(
                     chunk[-1][0].internal_metadata.stream_ordering,
                 )
@@ -460,14 +460,14 @@ class EventsStore(EventsWorkerStore):
                         origin_type = "remote"
                         origin_entity = get_domain_from_id(event.sender)
 
-                    event_counter.inc(event.type, origin_type, origin_entity)
+                    event_counter.labels(event.type, origin_type, origin_entity).inc()
 
-                for room_id, new_state in current_state_for_room.iteritems():
+                for room_id, new_state in iteritems(current_state_for_room):
                     self.get_current_state_ids.prefill(
                         (room_id, ), new_state
                     )
 
-                for room_id, latest_event_ids in new_forward_extremeties.iteritems():
+                for room_id, latest_event_ids in iteritems(new_forward_extremeties):
                     self.get_latest_event_ids_in_room.prefill(
                         (room_id,), list(latest_event_ids)
                     )
@@ -644,20 +644,20 @@ class EventsStore(EventsWorkerStore):
         """
         existing_state = yield self.get_current_state_ids(room_id)
 
-        existing_events = set(existing_state.itervalues())
-        new_events = set(ev_id for ev_id in current_state.itervalues())
+        existing_events = set(itervalues(existing_state))
+        new_events = set(ev_id for ev_id in itervalues(current_state))
         changed_events = existing_events ^ new_events
 
         if not changed_events:
             return
 
         to_delete = {
-            key: ev_id for key, ev_id in existing_state.iteritems()
+            key: ev_id for key, ev_id in iteritems(existing_state)
            if ev_id in changed_events
        }
        events_to_insert = (new_events - existing_events)
        to_insert = {
-            key: ev_id for key, ev_id in current_state.iteritems()
+            key: ev_id for key, ev_id in iteritems(current_state)
            if ev_id in events_to_insert
        }
 
@@ -760,11 +760,11 @@ class EventsStore(EventsWorkerStore):
         )
 
     def _update_current_state_txn(self, txn, state_delta_by_room, max_stream_order):
-        for room_id, current_state_tuple in state_delta_by_room.iteritems():
+        for room_id, current_state_tuple in iteritems(state_delta_by_room):
                 to_delete, to_insert = current_state_tuple
                 txn.executemany(
                     "DELETE FROM current_state_events WHERE event_id = ?",
-                    [(ev_id,) for ev_id in to_delete.itervalues()],
+                    [(ev_id,) for ev_id in itervalues(to_delete)],
                 )
 
                 self._simple_insert_many_txn(
@@ -777,7 +777,7 @@ class EventsStore(EventsWorkerStore):
                             "type": key[0],
                             "state_key": key[1],
                         }
-                        for key, ev_id in to_insert.iteritems()
+                        for key, ev_id in iteritems(to_insert)
                     ],
                 )
 
@@ -796,7 +796,7 @@ class EventsStore(EventsWorkerStore):
                             "event_id": ev_id,
                             "prev_event_id": to_delete.get(key, None),
                         }
-                        for key, ev_id in state_deltas.iteritems()
+                        for key, ev_id in iteritems(state_deltas)
                     ]
                 )
 
@@ -839,7 +839,7 @@ class EventsStore(EventsWorkerStore):
 
     def _update_forward_extremities_txn(self, txn, new_forward_extremities,
                                         max_stream_order):
-        for room_id, new_extrem in new_forward_extremities.iteritems():
+        for room_id, new_extrem in iteritems(new_forward_extremities):
             self._simple_delete_txn(
                 txn,
                 table="event_forward_extremities",
@@ -857,7 +857,7 @@ class EventsStore(EventsWorkerStore):
                     "event_id": ev_id,
                     "room_id": room_id,
                 }
-                for room_id, new_extrem in new_forward_extremities.iteritems()
+                for room_id, new_extrem in iteritems(new_forward_extremities)
                 for ev_id in new_extrem
             ],
         )
@@ -874,7 +874,7 @@ class EventsStore(EventsWorkerStore):
                     "event_id": event_id,
                     "stream_ordering": max_stream_order,
                 }
-                for room_id, new_extrem in new_forward_extremities.iteritems()
+                for room_id, new_extrem in iteritems(new_forward_extremities)
                 for event_id in new_extrem
            ]
        )
@@ -902,7 +902,7 @@ class EventsStore(EventsWorkerStore):
                         new_events_and_contexts[event.event_id] = (event, context)
             else:
                 new_events_and_contexts[event.event_id] = (event, context)
-        return new_events_and_contexts.values()
+        return list(new_events_and_contexts.values())
 
     def _update_room_depths_txn(self, txn, events_and_contexts, backfilled):
         """Update min_depth for each room
@@ -928,7 +928,7 @@ class EventsStore(EventsWorkerStore):
                     event.depth, depth_updates.get(event.room_id, event.depth)
                 )
 
-        for room_id, depth in depth_updates.iteritems():
+        for room_id, depth in iteritems(depth_updates):
             self._update_min_depth_for_room_txn(txn, room_id, depth)
 
     def _update_outliers_txn(self, txn, events_and_contexts):
@@ -1312,7 +1312,7 @@ class EventsStore(EventsWorkerStore):
                 " WHERE e.event_id IN (%s)"
             ) % (",".join(["?"] * len(ev_map)),)
 
-            txn.execute(sql, ev_map.keys())
+            txn.execute(sql, list(ev_map))
             rows = self.cursor_to_dict(txn)
             for row in rows:
                 event = ev_map[row["event_id"]]
@@ -1575,7 +1575,7 @@ class EventsStore(EventsWorkerStore):
 
             chunks = [
                 event_ids[i:i + 100]
-                for i in xrange(0, len(event_ids), 100)
+                for i in range(0, len(event_ids), 100)
             ]
             for chunk in chunks:
                 ev_rows = self._simple_select_many_txn(
@@ -1989,7 +1989,7 @@ class EventsStore(EventsWorkerStore):
         logger.info("[purge] finding state groups which depend on redundant"
                     " state groups")
         remaining_state_groups = []
-        for i in xrange(0, len(state_rows), 100):
+        for i in range(0, len(state_rows), 100):
             chunk = [sg for sg, in state_rows[i:i + 100]]
             # look for state groups whose prev_state_group is one we are about
             # to delete
@@ -2045,7 +2045,7 @@ class EventsStore(EventsWorkerStore):
                         "state_key": key[1],
                         "event_id": state_id,
                     }
-                    for key, state_id in curr_state.iteritems()
+                    for key, state_id in iteritems(curr_state)
                 ],
             )
 

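The metrics migration above also changes the counter call shape: the old `metrics.register_counter(...)` objects took label values directly in `inc(...)`, whereas `prometheus_client.Counter` returns a labelled child via `.labels(...)` which is then incremented. A toy stand-in (not the real library) illustrating that API shape:

```python
class Counter:
    """Minimal model of prometheus_client.Counter's labels()/inc() shape."""
    def __init__(self, name, documentation, labelnames=()):
        self.name = name
        self.labelnames = tuple(labelnames)
        self.values = {}

    def labels(self, *labelvalues):
        counter = self
        key = tuple(labelvalues)

        class Child(object):
            def inc(self, amount=1):
                # each distinct label tuple gets its own running total
                counter.values[key] = counter.values.get(key, 0) + amount
        return Child()

    def inc(self, amount=1):
        self.labels().inc(amount)

event_counter = Counter("synapse_storage_events_persisted_events_sep", "",
                        ["type", "origin_type", "origin_entity"])
# new style: select the child by label values, then increment
event_counter.labels("m.room.message", "local", "@alice:hs").inc()
assert event_counter.values[("m.room.message", "local", "@alice:hs")] == 1
```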
+ 1 - 1
synapse/storage/events_worker.py

@@ -337,7 +337,7 @@ class EventsWorkerStore(SQLBaseStore):
     def _fetch_event_rows(self, txn, events):
         rows = []
         N = 200
-        for i in range(1 + len(events) / N):
+        for i in range(1 + len(events) // N):
             evs = events[i * N:(i + 1) * N]
             if not evs:
                 break

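The one-character events_worker.py fix matters on Python 3, where `/` is true division: `range(1 + len(events) / N)` would raise TypeError because `range()` rejects floats. Floor division keeps the batching arithmetic integral:

```python
events = list(range(450))
N = 200

# py3 true division yields a float
assert len(events) / N == 2.25
# floor division gives the int that range() needs
n_batches = 1 + len(events) // N
assert n_batches == 3

batches = [events[i * N:(i + 1) * N] for i in range(n_batches)]
assert [len(b) for b in batches] == [200, 200, 50]
```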
+ 1 - 1
synapse/storage/filtering.py

@@ -44,7 +44,7 @@ class FilteringStore(SQLBaseStore):
             desc="get_user_filter",
         )
 
-        defer.returnValue(json.loads(str(def_json).decode("utf-8")))
+        defer.returnValue(json.loads(bytes(def_json).decode("utf-8")))
 
     def add_user_filter(self, user_localpart, user_filter):
         def_json = encode_canonical_json(user_filter)

+ 12 - 4
synapse/storage/keys.py

@@ -17,6 +17,7 @@ from ._base import SQLBaseStore
 from synapse.util.caches.descriptors import cachedInlineCallbacks
 
 from twisted.internet import defer
+import six
 
 import OpenSSL
 from signedjson.key import decode_verify_key_bytes
@@ -26,6 +27,13 @@ import logging
 
 logger = logging.getLogger(__name__)
 
+# py2 sqlite has buffer hardcoded as only binary type, so we must use it,
+# despite being deprecated and removed in favor of memoryview
+if six.PY2:
+    db_binary_type = buffer
+else:
+    db_binary_type = memoryview
+
 
 class KeyStore(SQLBaseStore):
     """Persistence for signature verification keys and tls X.509 certificates
@@ -72,7 +80,7 @@ class KeyStore(SQLBaseStore):
             values={
                 "from_server": from_server,
                 "ts_added_ms": time_now_ms,
-                "tls_certificate": buffer(tls_certificate_bytes),
+                "tls_certificate": db_binary_type(tls_certificate_bytes),
             },
             desc="store_server_certificate",
         )
@@ -92,7 +100,7 @@ class KeyStore(SQLBaseStore):
 
         if verify_key_bytes:
             defer.returnValue(decode_verify_key_bytes(
-                key_id, str(verify_key_bytes)
+                key_id, bytes(verify_key_bytes)
             ))
 
     @defer.inlineCallbacks
@@ -135,7 +143,7 @@ class KeyStore(SQLBaseStore):
                 values={
                     "from_server": from_server,
                     "ts_added_ms": time_now_ms,
-                    "verify_key": buffer(verify_key.encode()),
+                    "verify_key": db_binary_type(verify_key.encode()),
                 },
             )
             txn.call_after(
@@ -172,7 +180,7 @@ class KeyStore(SQLBaseStore):
                 "from_server": from_server,
                 "ts_added_ms": ts_now_ms,
                 "ts_valid_until_ms": ts_expires_ms,
-                "key_json": buffer(key_json_bytes),
+                "key_json": db_binary_type(key_json_bytes),
             },
             desc="store_server_keys_json",
         )

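The `db_binary_type` switch in keys.py exists because Python 2's sqlite3 only accepts `buffer` for BLOB parameters, while Python 3 uses `memoryview`. A sketch of the py3 path against an in-memory SQLite table (the table and column names here are illustrative, not Synapse's schema):

```python
import sqlite3

db_binary_type = memoryview  # on py2 this would be `buffer`

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE server_keys (key_id TEXT, verify_key BLOB)")

key_bytes = b"\x00\x01binary-key"
# wrap the raw bytes so sqlite binds them as a BLOB
conn.execute("INSERT INTO server_keys VALUES (?, ?)",
             ("ed25519:auto", db_binary_type(key_bytes)))

(stored,) = conn.execute("SELECT verify_key FROM server_keys").fetchone()
# bytes(...) round-trips what comes back, mirroring the str -> bytes fix above
assert bytes(stored) == key_bytes
```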
+ 1 - 1
synapse/storage/prepare_database.py

@@ -26,7 +26,7 @@ logger = logging.getLogger(__name__)
 
 # Remember to update this number every time a change is made to database
 # schema files, so the users will be informed on server restarts.
-SCHEMA_VERSION = 49
+SCHEMA_VERSION = 50
 
 dir_path = os.path.abspath(os.path.dirname(__file__))
 

+ 2 - 5
synapse/storage/presence.py

@@ -16,6 +16,7 @@
 from ._base import SQLBaseStore
 from synapse.api.constants import PresenceState
 from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
+from synapse.util import batch_iter
 
 from collections import namedtuple
 from twisted.internet import defer
@@ -115,11 +116,7 @@ class PresenceStore(SQLBaseStore):
             " AND user_id IN (%s)"
         )
 
-        batches = (
-            presence_states[i:i + 50]
-            for i in xrange(0, len(presence_states), 50)
-        )
-        for states in batches:
+        for states in batch_iter(presence_states, 50):
             args = [stream_id]
             args.extend(s.user_id for s in states)
             txn.execute(

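`batch_iter` (added to synapse.util in PR #3245) replaces the ad-hoc slicing generator deleted above. A sketch of an equivalent helper, assuming it yields chunks of at most `size` items:

```python
from itertools import islice

def batch_iter(iterable, size):
    # yield successive tuples of at most `size` items; unlike slicing,
    # this works for any iterable, not just sequences
    it = iter(iterable)
    return iter(lambda: tuple(islice(it, size)), ())

presence_states = list(range(120))
chunks = list(batch_iter(presence_states, 50))
assert [len(c) for c in chunks] == [50, 50, 20]
```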
+ 30 - 29
synapse/storage/receipts.py

@@ -332,6 +332,35 @@ class ReceiptsStore(ReceiptsWorkerStore):
 
     def insert_linearized_receipt_txn(self, txn, room_id, receipt_type,
                                       user_id, event_id, data, stream_id):
+        res = self._simple_select_one_txn(
+            txn,
+            table="events",
+            retcols=["topological_ordering", "stream_ordering"],
+            keyvalues={"event_id": event_id},
+            allow_none=True
+        )
+
+        stream_ordering = int(res["stream_ordering"]) if res else None
+
+        # We don't want to clobber receipts for more recent events, so we
+        # have to compare orderings of existing receipts
+        if stream_ordering is not None:
+            sql = (
+                "SELECT stream_ordering, event_id FROM events"
+                " INNER JOIN receipts_linearized as r USING (event_id, room_id)"
+                " WHERE r.room_id = ? AND r.receipt_type = ? AND r.user_id = ?"
+            )
+            txn.execute(sql, (room_id, receipt_type, user_id))
+
+            for so, eid in txn:
+                if int(so) >= stream_ordering:
+                    logger.debug(
+                        "Ignoring new receipt for %s in favour of existing "
+                        "one for later event %s",
+                        event_id, eid,
+                    )
+                    return False
+
         txn.call_after(
             self.get_receipts_for_room.invalidate, (room_id, receipt_type)
         )
@@ -355,34 +384,6 @@ class ReceiptsStore(ReceiptsWorkerStore):
             (user_id, room_id, receipt_type)
         )
 
-        res = self._simple_select_one_txn(
-            txn,
-            table="events",
-            retcols=["topological_ordering", "stream_ordering"],
-            keyvalues={"event_id": event_id},
-            allow_none=True
-        )
-
-        topological_ordering = int(res["topological_ordering"]) if res else None
-        stream_ordering = int(res["stream_ordering"]) if res else None
-
-        # We don't want to clobber receipts for more recent events, so we
-        # have to compare orderings of existing receipts
-        sql = (
-            "SELECT topological_ordering, stream_ordering, event_id FROM events"
-            " INNER JOIN receipts_linearized as r USING (event_id, room_id)"
-            " WHERE r.room_id = ? AND r.receipt_type = ? AND r.user_id = ?"
-        )
-
-        txn.execute(sql, (room_id, receipt_type, user_id))
-
-        if topological_ordering:
-            for to, so, _ in txn:
-                if int(to) > topological_ordering:
-                    return False
-                elif int(to) == topological_ordering and int(so) >= stream_ordering:
-                    return False
-
         self._simple_delete_txn(
             txn,
             table="receipts_linearized",
@@ -406,7 +407,7 @@ class ReceiptsStore(ReceiptsWorkerStore):
             }
         )
 
-        if receipt_type == "m.read" and topological_ordering:
+        if receipt_type == "m.read" and stream_ordering is not None:
             self._remove_old_push_actions_before_txn(
                 txn,
                 room_id=room_id,

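The receipts rewrite implements PR #3318 ("ignore depth when updating"): instead of comparing both topological_ordering and stream_ordering, a new receipt is now discarded only when an existing receipt already points at an event with an equal or later stream_ordering. A standalone sketch of that guard (the function name is illustrative):

```python
def should_persist_receipt(new_stream_ordering, existing_receipts):
    """existing_receipts: (stream_ordering, event_id) rows for this user/room."""
    if new_stream_ordering is None:
        # referenced event is unknown locally; fall through and persist
        return True
    for so, event_id in existing_receipts:
        if int(so) >= new_stream_ordering:
            # an existing receipt already covers this or a later event
            return False
    return True

# newer than both existing receipts: keep it
assert should_persist_receipt(10, [(5, "$a"), (7, "$b")]) is True
# an existing receipt points at a later event: ignore the new one
assert should_persist_receipt(10, [(12, "$c")]) is False
```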
+ 37 - 0
synapse/storage/registration.py

@@ -36,6 +36,7 @@ class RegistrationWorkerStore(SQLBaseStore):
             retcols=[
                 "name", "password_hash", "is_guest",
                 "consent_version", "consent_server_notice_sent",
+                "appservice_id",
             ],
             allow_none=True,
             desc="get_user_by_id",
@@ -101,6 +102,13 @@ class RegistrationStore(RegistrationWorkerStore,
             columns=["user_id", "device_id"],
         )
 
+        self.register_background_index_update(
+            "users_creation_ts",
+            index_name="users_creation_ts",
+            table="users",
+            columns=["creation_ts"],
+        )
+
         # we no longer use refresh tokens, but it's possible that some people
         # might have a background update queued to build this index. Just
         # clear the background update.
@@ -485,6 +493,35 @@ class RegistrationStore(RegistrationWorkerStore,
         ret = yield self.runInteraction("count_users", _count_users)
         defer.returnValue(ret)
 
+    def count_daily_user_type(self):
+        """
+        Counts 1) native non guest users
+               2) native guests users
+               3) bridged users
+        who registered on the homeserver in the past 24 hours
+        """
+        def _count_daily_user_type(txn):
+            yesterday = int(self._clock.time()) - (60 * 60 * 24)
+
+            sql = """
+                SELECT user_type, COALESCE(count(*), 0) AS count FROM (
+                    SELECT
+                    CASE
+                        WHEN is_guest=0 AND appservice_id IS NULL THEN 'native'
+                        WHEN is_guest=1 AND appservice_id IS NULL THEN 'guest'
+                        WHEN is_guest=0 AND appservice_id IS NOT NULL THEN 'bridged'
+                    END AS user_type
+                    FROM users
+                    WHERE creation_ts > ?
+                ) AS t GROUP BY user_type
+            """
+            results = {'native': 0, 'guest': 0, 'bridged': 0}
+            txn.execute(sql, (yesterday,))
+            for row in txn:
+                results[row[0]] = row[1]
+            return results
+        return self.runInteraction("count_daily_user_type", _count_daily_user_type)
+
     @defer.inlineCallbacks
     def count_nonbridged_users(self):
         def _count_users(txn):

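The new `count_daily_user_type` query (part of the daily user-type phone-home stats, PR #3264) buckets recent registrations with a CASE over `is_guest` and `appservice_id`. The classification can be exercised standalone against an in-memory SQLite table (a minimal, hypothetical `users` schema, not Synapse's full one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_guest INTEGER,"
             " appservice_id TEXT, creation_ts INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?, ?)", [
    ("@alice:hs", 0, None, 100),    # native
    ("@guest:hs", 1, None, 100),    # guest
    ("@bridge:hs", 0, "irc", 100),  # bridged
    ("@old:hs", 0, None, 10),       # registered before the cutoff
])

sql = """
    SELECT user_type, COALESCE(count(*), 0) AS count FROM (
        SELECT CASE
            WHEN is_guest=0 AND appservice_id IS NULL THEN 'native'
            WHEN is_guest=1 AND appservice_id IS NULL THEN 'guest'
            WHEN is_guest=0 AND appservice_id IS NOT NULL THEN 'bridged'
        END AS user_type
        FROM users WHERE creation_ts > ?
    ) AS t GROUP BY user_type
"""
results = {'native': 0, 'guest': 0, 'bridged': 0}
for user_type, count in conn.execute(sql, (50,)):  # 50 = hypothetical cutoff
    results[user_type] = count
assert results == {'native': 1, 'guest': 1, 'bridged': 1}
```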
+ 6 - 4
synapse/storage/roommember.py

@@ -30,6 +30,8 @@ from synapse.types import get_domain_from_id
 import logging
 import simplejson as json
 
+from six import itervalues, iteritems
+
 logger = logging.getLogger(__name__)
 
 
@@ -272,7 +274,7 @@ class RoomMemberWorkerStore(EventsWorkerStore):
         users_in_room = {}
         member_event_ids = [
             e_id
-            for key, e_id in current_state_ids.iteritems()
+            for key, e_id in iteritems(current_state_ids)
             if key[0] == EventTypes.Member
         ]
 
@@ -289,7 +291,7 @@ class RoomMemberWorkerStore(EventsWorkerStore):
                     users_in_room = dict(prev_res)
                     member_event_ids = [
                         e_id
-                        for key, e_id in context.delta_ids.iteritems()
+                        for key, e_id in iteritems(context.delta_ids)
                         if key[0] == EventTypes.Member
                     ]
                     for etype, state_key in context.delta_ids:
@@ -741,7 +743,7 @@ class _JoinedHostsCache(object):
             if state_entry.state_group == self.state_group:
                 pass
             elif state_entry.prev_group == self.state_group:
-                for (typ, state_key), event_id in state_entry.delta_ids.iteritems():
+                for (typ, state_key), event_id in iteritems(state_entry.delta_ids):
                     if typ != EventTypes.Member:
                         continue
 
@@ -771,7 +773,7 @@ class _JoinedHostsCache(object):
                 self.state_group = state_entry.state_group
             else:
                 self.state_group = object()
                 self.state_group = object()
-            self._len = sum(len(v) for v in self.hosts_to_joined_users.itervalues())
+            self._len = sum(len(v) for v in itervalues(self.hosts_to_joined_users))
         defer.returnValue(frozenset(self.hosts_to_joined_users))
         defer.returnValue(frozenset(self.hosts_to_joined_users))
 
 
     def __len__(self):
     def __len__(self):
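The `iteritems`/`itervalues` swaps above rely on six returning a memory-efficient iterator on both Python 2 and 3. A standalone illustration (the event IDs and the fallback shims are ours, not synapse's; `EventTypes.Member` is the string `"m.room.member"`):

```python
try:
    from six import iteritems, itervalues
except ImportError:
    # pure-py3 fallback with the same semantics as six
    def iteritems(d):
        return iter(d.items())

    def itervalues(d):
        return iter(d.values())

current_state_ids = {
    ("m.room.member", "@alice:example.org"): "$event_a",
    ("m.room.member", "@bob:example.org"): "$event_b",
    ("m.room.name", ""): "$event_c",
}

# Mirrors the rewritten comprehension in RoomMemberWorkerStore:
# pick out member events without building an intermediate list on py2.
member_event_ids = sorted(
    e_id
    for key, e_id in iteritems(current_state_ids)
    if key[0] == "m.room.member"
)

event_count = sum(1 for _ in itervalues(current_state_ids))
```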

synapse/storage/schema/delta/50/add_creation_ts_users_index.sql (+19, -0)

@@ -0,0 +1,19 @@
+/* Copyright 2018 New Vector Ltd
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+
+INSERT into background_updates (update_name, progress_json)
+    VALUES ('users_creation_ts', '{}');
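The delta above queues a background update rather than creating the index inline. A rough sqlite sketch of what that registration step amounts to; the table layout here is simplified from synapse's actual `background_updates` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in for synapse's background_updates table.
conn.execute(
    "CREATE TABLE background_updates ("
    " update_name TEXT PRIMARY KEY,"
    " progress_json TEXT NOT NULL)"
)
# The statement from the delta file: queue the job with empty progress.
conn.execute(
    "INSERT INTO background_updates (update_name, progress_json)"
    " VALUES ('users_creation_ts', '{}')"
)
queued = conn.execute(
    "SELECT update_name, progress_json FROM background_updates"
).fetchone()
```

A background-update runner would then pick the row up, build the index in batches, and delete the row when done.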

synapse/storage/search.py (+5, -4)

@@ -18,13 +18,14 @@ import logging
 import re
 import simplejson as json
 
+from six import string_types
+
 from twisted.internet import defer
 
 from .background_updates import BackgroundUpdateStore
 from synapse.api.errors import SynapseError
 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
 
-
 logger = logging.getLogger(__name__)
 
 SearchEntry = namedtuple('SearchEntry', [
@@ -126,7 +127,7 @@ class SearchStore(BackgroundUpdateStore):
                     # skip over it.
                     continue
 
-                if not isinstance(value, basestring):
+                if not isinstance(value, string_types):
                     # If the event body, name or topic isn't a string
                     # then skip over it
                     continue
@@ -447,7 +448,7 @@ class SearchStore(BackgroundUpdateStore):
             "search_msgs", self.cursor_to_dict, sql, *args
         )
 
-        results = filter(lambda row: row["room_id"] in room_ids, results)
+        results = list(filter(lambda row: row["room_id"] in room_ids, results))
 
         events = yield self._get_events([r["event_id"] for r in results])
 
@@ -602,7 +603,7 @@ class SearchStore(BackgroundUpdateStore):
             "search_rooms", self.cursor_to_dict, sql, *args
         )
 
-        results = filter(lambda row: row["room_id"] in room_ids, results)
+        results = list(filter(lambda row: row["room_id"] in room_ids, results))
 
         events = yield self._get_events([r["event_id"] for r in results])
 

synapse/storage/signatures.py (+10, -2)

@@ -14,6 +14,7 @@
 # limitations under the License.
 
 from twisted.internet import defer
+import six
 
 from ._base import SQLBaseStore
 
@@ -21,6 +22,13 @@ from unpaddedbase64 import encode_base64
 from synapse.crypto.event_signing import compute_event_reference_hash
 from synapse.util.caches.descriptors import cached, cachedList
 
+# py2 sqlite has buffer hardcoded as only binary type, so we must use it,
+# despite being deprecated and removed in favor of memoryview
+if six.PY2:
+    db_binary_type = buffer
+else:
+    db_binary_type = memoryview
+
 
 class SignatureWorkerStore(SQLBaseStore):
     @cached()
@@ -56,7 +64,7 @@ class SignatureWorkerStore(SQLBaseStore):
             for e_id, h in hashes.items()
         }
 
-        defer.returnValue(hashes.items())
+        defer.returnValue(list(hashes.items()))
 
     def _get_event_reference_hashes_txn(self, txn, event_id):
         """Get all the hashes for a given PDU.
@@ -91,7 +99,7 @@ class SignatureStore(SignatureWorkerStore):
             vals.append({
                 "event_id": event.event_id,
                 "algorithm": ref_alg,
-                "hash": buffer(ref_hash_bytes),
+                "hash": db_binary_type(ref_hash_bytes),
             })
 
         self._simple_insert_many_txn(
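The `db_binary_type` shim above (repeated in `transactions.py` below) exists because Python 2's sqlite3 module only maps the deprecated `buffer` type to BLOB, while Python 3 uses `memoryview`. A quick standalone check of the Python 3 branch, with a made-up table and hash value:

```python
import sqlite3

db_binary_type = memoryview  # the Python 3 branch of the shim above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_reference_hashes (event_id TEXT, hash BLOB)")
ref_hash_bytes = b"\x00\x01\x02fake-sha256-digest"
# Wrapping in memoryview binds the value as a BLOB, as buffer() did on py2.
conn.execute(
    "INSERT INTO event_reference_hashes VALUES (?, ?)",
    ("$event:example.org", db_binary_type(ref_hash_bytes)),
)
stored = conn.execute("SELECT hash FROM event_reference_hashes").fetchone()[0]
```

Reading the column back yields `bytes`, so the round trip preserves the raw digest.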

synapse/storage/state.py (+25, -22)

@@ -16,6 +16,9 @@
 from collections import namedtuple
 import logging
 
+from six import iteritems, itervalues
+from six.moves import range
+
 from twisted.internet import defer
 
 from synapse.storage.background_updates import BackgroundUpdateStore
@@ -134,7 +137,7 @@ class StateGroupWorkerStore(SQLBaseStore):
             event_ids,
         )
 
-        groups = set(event_to_groups.itervalues())
+        groups = set(itervalues(event_to_groups))
         group_to_state = yield self._get_state_for_groups(groups)
 
         defer.returnValue(group_to_state)
@@ -166,18 +169,18 @@ class StateGroupWorkerStore(SQLBaseStore):
 
         state_event_map = yield self.get_events(
             [
-                ev_id for group_ids in group_to_ids.itervalues()
-                for ev_id in group_ids.itervalues()
+                ev_id for group_ids in itervalues(group_to_ids)
+                for ev_id in itervalues(group_ids)
             ],
             get_prev_content=False
         )
 
         defer.returnValue({
             group: [
-                state_event_map[v] for v in event_id_map.itervalues()
+                state_event_map[v] for v in itervalues(event_id_map)
                 if v in state_event_map
             ]
-            for group, event_id_map in group_to_ids.iteritems()
+            for group, event_id_map in iteritems(group_to_ids)
         })
 
     @defer.inlineCallbacks
@@ -186,7 +189,7 @@ class StateGroupWorkerStore(SQLBaseStore):
         """
         results = {}
 
-        chunks = [groups[i:i + 100] for i in xrange(0, len(groups), 100)]
+        chunks = [groups[i:i + 100] for i in range(0, len(groups), 100)]
         for chunk in chunks:
             res = yield self.runInteraction(
                 "_get_state_groups_from_groups",
@@ -347,21 +350,21 @@ class StateGroupWorkerStore(SQLBaseStore):
             event_ids,
         )
 
-        groups = set(event_to_groups.itervalues())
+        groups = set(itervalues(event_to_groups))
         group_to_state = yield self._get_state_for_groups(groups, types)
 
         state_event_map = yield self.get_events(
-            [ev_id for sd in group_to_state.itervalues() for ev_id in sd.itervalues()],
+            [ev_id for sd in itervalues(group_to_state) for ev_id in itervalues(sd)],
             get_prev_content=False
        )
 
         event_to_state = {
             event_id: {
                 k: state_event_map[v]
-                for k, v in group_to_state[group].iteritems()
+                for k, v in iteritems(group_to_state[group])
                 if v in state_event_map
             }
-            for event_id, group in event_to_groups.iteritems()
+            for event_id, group in iteritems(event_to_groups)
         }
 
         defer.returnValue({event: event_to_state[event] for event in event_ids})
@@ -384,12 +387,12 @@ class StateGroupWorkerStore(SQLBaseStore):
             event_ids,
         )
 
-        groups = set(event_to_groups.itervalues())
+        groups = set(itervalues(event_to_groups))
         group_to_state = yield self._get_state_for_groups(groups, types)
 
         event_to_state = {
             event_id: group_to_state[group]
-            for event_id, group in event_to_groups.iteritems()
+            for event_id, group in iteritems(event_to_groups)
         }
 
         defer.returnValue({event: event_to_state[event] for event in event_ids})
@@ -503,7 +506,7 @@ class StateGroupWorkerStore(SQLBaseStore):
         got_all = is_all or not missing_types
 
         return {
-            k: v for k, v in state_dict_ids.iteritems()
+            k: v for k, v in iteritems(state_dict_ids)
             if include(k[0], k[1])
         }, missing_types, got_all
 
@@ -562,12 +565,12 @@ class StateGroupWorkerStore(SQLBaseStore):
 
             # Now we want to update the cache with all the things we fetched
             # from the database.
-            for group, group_state_dict in group_to_state_dict.iteritems():
+            for group, group_state_dict in iteritems(group_to_state_dict):
                 state_dict = results[group]
 
                 state_dict.update(
                     ((intern_string(k[0]), intern_string(k[1])), to_ascii(v))
-                    for k, v in group_state_dict.iteritems()
+                    for k, v in iteritems(group_state_dict)
                 )
 
                 self._state_group_cache.update(
@@ -654,7 +657,7 @@ class StateGroupWorkerStore(SQLBaseStore):
                             "state_key": key[1],
                             "event_id": state_id,
                         }
-                        for key, state_id in delta_ids.iteritems()
+                        for key, state_id in iteritems(delta_ids)
                     ],
                 )
             else:
@@ -669,7 +672,7 @@ class StateGroupWorkerStore(SQLBaseStore):
                             "state_key": key[1],
                             "event_id": state_id,
                         }
-                        for key, state_id in current_state_ids.iteritems()
+                        for key, state_id in iteritems(current_state_ids)
                     ],
                 )
 
@@ -794,11 +797,11 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
                     "state_group": state_group_id,
                     "event_id": event_id,
                 }
-                for event_id, state_group_id in state_groups.iteritems()
+                for event_id, state_group_id in iteritems(state_groups)
             ],
         )
 
-        for event_id, state_group_id in state_groups.iteritems():
+        for event_id, state_group_id in iteritems(state_groups):
             txn.call_after(
                 self._get_state_group_for_event.prefill,
                 (event_id,), state_group_id
@@ -826,7 +829,7 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
 
         def reindex_txn(txn):
             new_last_state_group = last_state_group
-            for count in xrange(batch_size):
+            for count in range(batch_size):
                 txn.execute(
                     "SELECT id, room_id FROM state_groups"
                     " WHERE ? < id AND id <= ?"
@@ -884,7 +887,7 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
                         # of keys
 
                         delta_state = {
-                            key: value for key, value in curr_state.iteritems()
+                            key: value for key, value in iteritems(curr_state)
                             if prev_state.get(key, None) != value
                         }
 
@@ -924,7 +927,7 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
                                     "state_key": key[1],
                                     "event_id": state_id,
                                 }
-                                for key, state_id in delta_state.iteritems()
+                                for key, state_id in iteritems(delta_state)
                             ],
                         )
 
 

synapse/storage/transactions.py (+9, -1)

@@ -17,6 +17,7 @@ from ._base import SQLBaseStore
 from synapse.util.caches.descriptors import cached
 
 from twisted.internet import defer
+import six
 
 from canonicaljson import encode_canonical_json
 
@@ -25,6 +26,13 @@ from collections import namedtuple
 import logging
 import simplejson as json
 
+# py2 sqlite has buffer hardcoded as only binary type, so we must use it,
+# despite being deprecated and removed in favor of memoryview
+if six.PY2:
+    db_binary_type = buffer
+else:
+    db_binary_type = memoryview
+
 logger = logging.getLogger(__name__)
 
 
@@ -110,7 +118,7 @@ class TransactionStore(SQLBaseStore):
                 "transaction_id": transaction_id,
                 "origin": origin,
                 "response_code": code,
-                "response_json": buffer(encode_canonical_json(response_dict)),
+                "response_json": db_binary_type(encode_canonical_json(response_dict)),
                 "ts": self._clock.time_msec(),
             },
             or_ignore=True,

synapse/storage/user_directory.py (+5, -3)

@@ -22,6 +22,8 @@ from synapse.api.constants import EventTypes, JoinRules
 from synapse.storage.engines import PostgresEngine, Sqlite3Engine
 from synapse.types import get_domain_from_id, get_localpart_from_id
 
+from six import iteritems
+
 import re
 import logging
 
@@ -100,7 +102,7 @@ class UserDirectoryStore(SQLBaseStore):
                     user_id, get_localpart_from_id(user_id), get_domain_from_id(user_id),
                     profile.display_name,
                 )
-                for user_id, profile in users_with_profile.iteritems()
+                for user_id, profile in iteritems(users_with_profile)
             )
         elif isinstance(self.database_engine, Sqlite3Engine):
             sql = """
@@ -112,7 +114,7 @@ class UserDirectoryStore(SQLBaseStore):
                     user_id,
                     "%s %s" % (user_id, p.display_name,) if p.display_name else user_id
                 )
-                for user_id, p in users_with_profile.iteritems()
+                for user_id, p in iteritems(users_with_profile)
             )
         else:
             # This should be unreachable.
@@ -130,7 +132,7 @@ class UserDirectoryStore(SQLBaseStore):
                         "display_name": profile.display_name,
                         "avatar_url": profile.avatar_url,
                     }
-                    for user_id, profile in users_with_profile.iteritems()
+                    for user_id, profile in iteritems(users_with_profile)
                 ]
             )
             for user_id in users_with_profile:

synapse/util/__init__.py (+18, -0)

@@ -20,6 +20,8 @@ from twisted.internet import defer, reactor, task
 import time
 import logging
 
+from itertools import islice
+
 logger = logging.getLogger(__name__)
 
 
@@ -79,3 +81,19 @@ class Clock(object):
         except Exception:
             if not ignore_errs:
                 raise
+
+
+def batch_iter(iterable, size):
+    """batch an iterable up into tuples with a maximum size
+
+    Args:
+        iterable (iterable): the iterable to slice
+        size (int): the maximum batch size
+
+    Returns:
+        an iterator over the chunks
+    """
+    # make sure we can deal with iterables like lists too
+    sourceiter = iter(iterable)
+    # call islice until it returns an empty tuple
+    return iter(lambda: tuple(islice(sourceiter, size)), ())
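Standalone, the new `batch_iter` helper behaves like this (same two-line implementation, copied out of the diff so it can run on its own):

```python
from itertools import islice


def batch_iter(iterable, size):
    """Batch an iterable up into tuples with a maximum size."""
    sourceiter = iter(iterable)
    # iter(callable, sentinel) keeps calling islice until it yields ()
    return iter(lambda: tuple(islice(sourceiter, size)), ())


batches = list(batch_iter(range(7), 3))
# a generator works too, since the input is wrapped with iter()
gen_batches = list(batch_iter((x * x for x in range(4)), 2))
```

The last batch is simply shorter when the input length is not a multiple of `size`.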

synapse/util/caches/__init__.py (+69, -18)

@@ -13,28 +13,77 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import synapse.metrics
+from prometheus_client.core import Gauge, REGISTRY, GaugeMetricFamily
+
 import os
 
-CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.5))
+from six.moves import intern
+import six
 
-metrics = synapse.metrics.get_metrics_for("synapse.util.caches")
+CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.5))
 
 caches_by_name = {}
-# cache_counter = metrics.register_cache(
-#     "cache",
-#     lambda: {(name,): len(caches_by_name[name]) for name in caches_by_name.keys()},
-#     labels=["name"],
-# )
-
-
-def register_cache(name, cache):
-    caches_by_name[name] = cache
-    return metrics.register_cache(
-        "cache",
-        lambda: len(cache),
-        name,
-    )
+collectors_by_name = {}
+
+cache_size = Gauge("synapse_util_caches_cache:size", "", ["name"])
+cache_hits = Gauge("synapse_util_caches_cache:hits", "", ["name"])
+cache_evicted = Gauge("synapse_util_caches_cache:evicted_size", "", ["name"])
+cache_total = Gauge("synapse_util_caches_cache:total", "", ["name"])
+
+response_cache_size = Gauge("synapse_util_caches_response_cache:size", "", ["name"])
+response_cache_hits = Gauge("synapse_util_caches_response_cache:hits", "", ["name"])
+response_cache_evicted = Gauge(
+    "synapse_util_caches_response_cache:evicted_size", "", ["name"]
+)
+response_cache_total = Gauge("synapse_util_caches_response_cache:total", "", ["name"])
+
+
+def register_cache(cache_type, cache_name, cache):
+
+    # Check if the metric is already registered. Unregister it, if so.
+    # This usually happens during tests, as at runtime these caches are
+    # effectively singletons.
+    metric_name = "cache_%s_%s" % (cache_type, cache_name)
+    if metric_name in collectors_by_name.keys():
+        REGISTRY.unregister(collectors_by_name[metric_name])
+
+    class CacheMetric(object):
+
+        hits = 0
+        misses = 0
+        evicted_size = 0
+
+        def inc_hits(self):
+            self.hits += 1
+
+        def inc_misses(self):
+            self.misses += 1
+
+        def inc_evictions(self, size=1):
+            self.evicted_size += size
+
+        def describe(self):
+            return []
+
+        def collect(self):
+            if cache_type == "response_cache":
+                response_cache_size.labels(cache_name).set(len(cache))
+                response_cache_hits.labels(cache_name).set(self.hits)
+                response_cache_evicted.labels(cache_name).set(self.evicted_size)
+                response_cache_total.labels(cache_name).set(self.hits + self.misses)
+            else:
+                cache_size.labels(cache_name).set(len(cache))
+                cache_hits.labels(cache_name).set(self.hits)
+                cache_evicted.labels(cache_name).set(self.evicted_size)
+                cache_total.labels(cache_name).set(self.hits + self.misses)
+
+            yield GaugeMetricFamily("__unused", "")
+
+    metric = CacheMetric()
+    REGISTRY.register(metric)
+    caches_by_name[cache_name] = cache
+    collectors_by_name[metric_name] = metric
+    return metric
 
 
 KNOWN_KEYS = {
@@ -66,7 +115,9 @@ def intern_string(string):
         return None
 
     try:
-        string = string.encode("ascii")
+        if six.PY2:
+            string = string.encode("ascii")
+
         return intern(string)
     except UnicodeEncodeError:
         return string
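Stripped of prometheus_client, the bookkeeping that `register_cache` wires up is just three counters plus values derived at collection time. A dependency-free sketch (the class and method names here are ours, mirroring but not identical to synapse's `CacheMetric`):

```python
class CacheCounters(object):
    """Minimal stand-in for the CacheMetric bookkeeping above."""

    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.evicted_size = 0

    def inc_hits(self):
        self.hits += 1

    def inc_misses(self):
        self.misses += 1

    def inc_evictions(self, size=1):
        self.evicted_size += size

    def snapshot(self, cache_len):
        # roughly what collect() exports per cache: size, hits,
        # evictions, and total lookups (hits + misses)
        return {
            "size": cache_len,
            "hits": self.hits,
            "evicted_size": self.evicted_size,
            "total": self.hits + self.misses,
        }


cache = {"key": "value"}
m = CacheCounters()
m.inc_hits()
m.inc_misses()
m.inc_evictions(2)
stats = m.snapshot(len(cache))
```

The real implementation differs mainly in that `collect()` pushes these values into shared, name-labelled prometheus `Gauge`s instead of returning a dict.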

synapse/util/caches/descriptors.py (+10, -6)

@@ -31,6 +31,9 @@ import functools
 import inspect
 import threading
 
+from six import string_types, itervalues
+import six
+
 
 logger = logging.getLogger(__name__)
 
@@ -80,7 +83,7 @@ class Cache(object):
         self.name = name
         self.keylen = keylen
         self.thread = None
-        self.metrics = register_cache(name, self.cache)
+        self.metrics = register_cache("cache", name, self.cache)
 
     def _on_evicted(self, evicted_count):
         self.metrics.inc_evictions(evicted_count)
@@ -205,7 +208,7 @@ class Cache(object):
     def invalidate_all(self):
         self.check_thread()
         self.cache.clear()
-        for entry in self._pending_deferred_cache.itervalues():
+        for entry in itervalues(self._pending_deferred_cache):
             entry.invalidate()
         self._pending_deferred_cache.clear()
 
@@ -392,9 +395,10 @@ class CacheDescriptor(_CacheDescriptorBase):
 
                 ret.addErrback(onErr)
 
-                # If our cache_key is a string, try to convert to ascii to save
-                # a bit of space in large caches
-                if isinstance(cache_key, basestring):
+                # If our cache_key is a string on py2, try to convert to ascii
+                # to save a bit of space in large caches. Py3 does this
+                # internally automatically.
+                if six.PY2 and isinstance(cache_key, string_types):
                     cache_key = to_ascii(cache_key)
 
                 result_d = ObservableDeferred(ret, consumeErrors=True)
@@ -565,7 +569,7 @@ class CacheListDescriptor(_CacheDescriptorBase):
                     return results
 
                 return logcontext.make_deferred_yieldable(defer.gatherResults(
-                    cached_defers.values(),
+                    list(cached_defers.values()),
                     consumeErrors=True,
                 ).addCallback(update_results_dict).addErrback(
                     unwrapFirstError

synapse/util/caches/dictionary_cache.py (+1, -1)

@@ -55,7 +55,7 @@ class DictionaryCache(object):
             __slots__ = []
 
         self.sentinel = Sentinel()
-        self.metrics = register_cache(name, self.cache)
+        self.metrics = register_cache("dictionary", name, self.cache)
 
     def check_thread(self):
         expected_thread = self.thread

synapse/util/caches/expiringcache.py (+2, -2)

@@ -52,12 +52,12 @@ class ExpiringCache(object):
 
         self._cache = OrderedDict()
 
-        self.metrics = register_cache(cache_name, self)
-
         self.iterable = iterable
 
         self._size_estimate = 0
 
+        self.metrics = register_cache("expiring", cache_name, self)
+
     def start(self):
         if not self._expiry_ms:
             # Don't bother starting the loop if things never expire

synapse/util/caches/response_cache.py (+6, -5)

@@ -17,7 +17,7 @@ import logging
 from twisted.internet import defer
 
 from synapse.util.async import ObservableDeferred
-from synapse.util.caches import metrics as cache_metrics
+from synapse.util.caches import register_cache
 from synapse.util.logcontext import make_deferred_yieldable, run_in_background
 
 logger = logging.getLogger(__name__)
@@ -38,15 +38,16 @@ class ResponseCache(object):
         self.timeout_sec = timeout_ms / 1000.
 
         self._name = name
-        self._metrics = cache_metrics.register_cache(
-            "response_cache",
-            size_callback=lambda: self.size(),
-            cache_name=name,
+        self._metrics = register_cache(
+            "response_cache", name, self
         )
 
     def size(self):
         return len(self.pending_result_cache)
 
+    def __len__(self):
+        return self.size()
+
     def get(self, key):
         """Look up the given key.
 

synapse/util/caches/stream_change_cache.py (+1, -1)

@@ -38,7 +38,7 @@ class StreamChangeCache(object):
         self._cache = sorteddict()
         self._earliest_known_stream_pos = current_stream_pos
         self.name = name
-        self.metrics = register_cache(self.name, self._cache)
+        self.metrics = register_cache("cache", self.name, self._cache)
 
         for entity, stream_pos in prefilled_cache.items():
             self.entity_has_changed(entity, stream_pos)

Too many files changed in this diff, so some files are not shown.