Python-Django-memcached
Pseudocode
```
given a URL, try finding that page in the cache
if the page is in the cache:
    return the cached page
else:
    generate the page
    save the generated page in the cache (for next time)
    return the generated page
```
Install
After installing Memcached itself, you’ll need to install a Memcached binding. There are several Python Memcached bindings available; the two most common are python-memcached and pylibmc.
```shell
$ sudo apt-get install memcached
$ sudo apt-get install python-memcached
```
To use Memcached with Django:
- Set BACKEND to django.core.cache.backends.memcached.MemcachedCache or django.core.cache.backends.memcached.PyLibMCCache (depending on your chosen Memcached binding).
- Set LOCATION to ip:port values, where ip is the IP address of the Memcached daemon and port is the port on which Memcached is running, or to a unix:path value, where path is the path to a Memcached Unix socket file.
In this example, Memcached is running on localhost (127.0.0.1) port 11211, using the python-memcached binding:
```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '127.0.0.1:11211',
        ],
    }
}
```
In this example, Memcached is available through a local Unix socket file /tmp/memcached.sock using the python-memcached binding:
```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'unix:/tmp/memcached.sock',
    }
}
```
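One excellent feature of Memcached is its ability to share a cache over multiple servers: Django treats every server listed in LOCATION as part of a single cache pool. A sketch, assuming Memcached daemons on two hypothetical hosts:

```python
# settings.py -- the host addresses here are hypothetical examples;
# Django spreads keys across all listed servers as one logical cache.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': [
            '172.19.26.240:11211',
            '172.19.26.242:11211',
        ],
    }
}
```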
Local-memory caching
This is the default cache if another is not specified in your settings file. If you want the speed advantages of in-memory caching but don’t have the capability of running Memcached, consider the local-memory cache backend. This cache is per-process (see below) and thread-safe. To use it, set BACKEND to "django.core.cache.backends.locmem.LocMemCache". For example:
```python
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}
```
The cache LOCATION is used to identify individual memory stores. If you only have one locmem cache, you can omit the LOCATION; however, if you have more than one local memory cache, you will need to assign a name to at least one of them in order to keep them separate.
Note that each process will have its own private cache instance, which means no cross-process caching is possible. This obviously also means the local memory cache isn’t particularly memory-efficient, so it’s probably not a good choice for production environments. It’s nice for development.
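The behaviour described above is easy to picture: a per-process, thread-safe store is essentially a dict guarded by a lock. A toy sketch (not Django's actual LocMemCache implementation, which also handles timeouts and culling):

```python
import threading


class LocalMemoryCache:
    """Toy per-process, thread-safe cache; a sketch, not Django's LocMemCache."""

    def __init__(self):
        self._data = {}                  # lives only inside this process
        self._lock = threading.Lock()    # serializes access across threads

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def set(self, key, value):
        with self._lock:
            self._data[key] = value


cache = LocalMemoryCache()
```

Because `_data` is an ordinary attribute of an object in one process's memory, a second process gets its own empty copy, which is exactly why locmem offers no cross-process caching.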
Usage
Decorator
A more granular way to use the caching framework is by caching the output of individual views. django.views.decorators.cache defines a cache_page decorator that will automatically cache the view’s response for you. It’s easy to use:
```python
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def my_view(request):
    ...
```
cache_page takes a single argument: the cache timeout, in seconds. In the above example, the result of the my_view() view will be cached for 15 minutes. (Note that we’ve written it as 60 * 15 for the purpose of readability. 60 * 15 will be evaluated to 900 – that is, 15 minutes multiplied by 60 seconds per minute.)
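The idea behind cache_page can be approximated in plain Python. A hedged sketch of a TTL-based caching decorator (using a dict and time.monotonic(), not Django's real cache key scheme):

```python
import functools
import time


def ttl_cache(timeout):
    """Sketch of cache_page's idea: cache a function's result for `timeout` seconds."""
    def decorator(func):
        store = {}  # maps call args -> (expiry time, cached result)

        @functools.wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            entry = store.get(args)
            if entry is not None and entry[0] > now:
                return entry[1]                     # still fresh: serve from cache
            result = func(*args)
            store[args] = (now + timeout, result)   # cache until expiry
            return result

        return wrapper
    return decorator


@ttl_cache(60 * 15)  # 900 seconds, like @cache_page(60 * 15)
def my_view(url):
    # Hypothetical "view" keyed by its URL argument.
    return f"rendered {url} at {time.monotonic()}"
```

Note that the cache is keyed by the arguments, so distinct URLs are cached separately, mirroring the per-URL behaviour described below.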
The per-view cache, like the per-site cache, is keyed off of the URL. If multiple URLs point at the same view, each URL will be cached separately. Continuing the my_view example, if your URLconf looks like this:
```python
urlpatterns = [
    url(r'^foo/([0-9]{1,2})/$', my_view),
]
```
then requests to /foo/1/ and /foo/23/ will be cached separately, as you may expect. But once a particular URL (e.g., /foo/23/) has been requested, subsequent requests to that URL will use the cache.
URLconf
Here’s the URLconf from earlier, without caching:

```python
urlpatterns = [
    url(r'^foo/([0-9]{1,2})/$', my_view),
]
```
Here’s the same thing, with my_view wrapped in cache_page:
```python
from django.views.decorators.cache import cache_page

urlpatterns = [
    url(r'^foo/([0-9]{1,2})/$', cache_page(60 * 15)(my_view)),
]
```
More Choice
For full control, you can use the low-level cache API inside a view. Here, compute_result() is a hypothetical stand-in for whatever expensive work the view does:

```python
from django.core.cache import cache
from django.http import HttpResponse

def heavy_view(request):
    cache_key = 'my_heavy_view_cache_key'
    cache_time = 900  # time to live, in seconds

    result = cache.get(cache_key)
    if result is None:  # check against None so falsy cached values aren't recomputed
        result = compute_result()  # hypothetical: some heavy calculations here
        cache.set(cache_key, result, cache_time)
    return HttpResponse(result)
```