runtime caches & perf considerations #196

@jerch

Description

While the resolver already holds most dependency graph aspects in flat data structures for fast access, we still have a few methods that have to calculate their final values from the given update_fields argument at runtime:

  • get_model_updates (already cached by a previous PR)
  • get_local_mro
  • get_select_related
  • get_prefetch_related
  • get_querysize

Every update_dependent call has to touch these methods once, so let's check their overhead:

from time import time
from exampleapp.models import SelfRef
from computedfields.models import active_resolver

def test():
    start = time()
    uf = set(['name', 'xy'])
    # simulate 1M calls
    for _ in range(1000000):
        active_resolver.get_model_updates(SelfRef, uf)
        fs = active_resolver.get_local_mro(SelfRef, uf)
        active_resolver.get_select_related(SelfRef, fs)
        active_resolver.get_prefetch_related(SelfRef, fs)
        active_resolver.get_querysize(SelfRef, fs)
    end = time()
    return end - start

test()

~15s to get those values 1M times. 😱

Since the final values of those methods don't change anymore for a given update_fields argument, we should use runtime caches:
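A minimal sketch of what such a runtime cache could look like. All names here (`CachedResolver`, `_compute_local_mro`) are illustrative, not the library's actual internals; the key detail is normalizing the unhashable `update_fields` set into a `frozenset` so it can serve as a dict key:

```python
class CachedResolver:
    """Illustrative memoization sketch, not the real resolver code."""

    def __init__(self):
        self._mro_cache = {}
        self.computations = 0  # counter to demonstrate cache hits

    def get_local_mro(self, model, update_fields):
        # update_fields may be a set or None; normalize to a hashable key
        key = (model, frozenset(update_fields) if update_fields else None)
        if key not in self._mro_cache:
            self._mro_cache[key] = self._compute_local_mro(model, update_fields)
        return self._mro_cache[key]

    def _compute_local_mro(self, model, update_fields):
        # stand-in for the original expensive graph walk
        self.computations += 1
        return sorted(update_fields or ())


resolver = CachedResolver()
resolver.get_local_mro(str, {'name', 'xy'})   # computes and stores
resolver.get_local_mro(str, {'xy', 'name'})   # cache hit: same frozenset key
```

The same pattern applies to get_select_related, get_prefetch_related and get_querysize; repeated calls with the same (model, update_fields) pair then reduce to a single dict lookup.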

~1.7s now, much better. 😺
