jmoiron / johnny-cache
johnny cache django caching framework
License: MIT License
this increases traffic to memcached and can be avoided.
I have the following statement at the beginning of my periodic celery task file:
from johnny.cache import enable
enable()
However, even if I remove all johnny-cache related settings (middleware, backend definitions, etc.), johnny-cache was still enabled for the entire project, as you can guess. To disable it entirely, I had to remove all of the enable statements.
I think the proper solution would be for enable() to do nothing when JOHNNY_CACHE is set to False in the backend definitions (for backends that don't actually benefit). I thought I had disabled johnny-cache when I did this, but it was still active when I observed what was happening in memcached (visible using memcached -vv).
I don't think johnny-cache should ever cache south_migrationhistory. If I manually delete and recreate my DB, I can get into this very confusing situation:
$ sudo -u postgres psql db_drawbridge
psql (9.3.2)
Type "help" for help.
db_drawbridge=# select * from south_migrationhistory;
id | app_name | migration | applied
----+----------+-----------+---------
(0 rows)
db_drawbridge=# \q
$ python manage.py migrate
GhostMigrations:
! These migrations are in the database but not on disk:
<self_serve: 0011_auto__add_field_userneedingconfirmation_tier>
! I'm not trusting myself; either fix this yourself by fiddling
! with the south_migrationhistory table, or pass --delete-ghost-migrations
! to South to have it delete ALL of these records (this may not be good).
Since this table is never queried whilst the site is running, there's no performance benefit from caching it.
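Until that is built in, one possible workaround is a table blacklist. The setting name here is taken from an older configuration that appears elsewhere in this tracker and varies across johnny-cache versions, so verify it against your version's settings module:

```python
# settings.py — hypothetical workaround: blacklist the South history
# table so johnny never caches it. Check the setting name for your
# johnny-cache version before relying on this.
MAN_IN_BLACKLIST = ['south_migrationhistory']
```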
It would be great to have support for Django 1.6. The most obvious change is that django.core.cache.backends.memcached.CacheClass was renamed to BaseMemcachedCache. Not sure about the rest.
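A version-tolerant lookup could paper over the rename. This is a sketch only (the stand-in modules simulate the two Django layouts; other 1.6 changes are not covered):

```python
import types

def resolve_memcached_base(module):
    """Return the memcached base class under its Django 1.6 name,
    falling back to the pre-1.6 name."""
    for name in ("BaseMemcachedCache", "CacheClass"):
        base = getattr(module, name, None)
        if base is not None:
            return base
    raise ImportError("no memcached base class found")

# stand-ins for django.core.cache.backends.memcached on new/old Django
fake_new = types.SimpleNamespace(
    BaseMemcachedCache=type("BaseMemcachedCache", (), {}))
fake_old = types.SimpleNamespace(
    CacheClass=type("CacheClass", (), {}))
```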
Johnny does not properly work with F() expression when it comes to dates, here is an example:
>>> from django.db.models import F
>>> from johnny import cache
>>> cache.enable()
>>> SomeData.objects.create(period_start=datetime.datetime.now() - datetime.timedelta(seconds=3600), period_end=datetime.datetime.now())
<SomeData: SomeData object>
>>> SomeData.objects.filter(period_start__lt=F('period_end') - datetime.timedelta(seconds=1))
[]
>>> cache.disable()
>>> SomeData.objects.filter(period_start__lt=F('period_end') - datetime.timedelta(seconds=1))
[<SomeData: SomeData object>]
The first query was supposed to return the newly created SomeData object, since period_start is 3600 seconds behind period_end. After disabling Johnny it works fine.
Django 1.4.3, Johnny 1.4.
Django 1.8 no longer has the class SqlDateCompiler, so we might have to modify the code to accommodate that.
https://github.com/jmoiron/johnny-cache/blob/master/johnny/cache.py#L412
It looks like ordering_aliases has moved out of the Query object in db/models/sql/query.py into the SQLCompiler object in db/models/sql/compiler.py, as of django/django@2f35c6f. As a result, the line in johnny/cache.py referencing cls.query.ordering_aliases fails with an AttributeError.
I'm not entirely sure how easy this is to fix. It looks like cls refers to SQLCompiler and so now has an ordering_aliases property, but I don't know enough about these parts of Django and johnny-cache to confirm that swapping out cls.query.ordering_aliases for cls.ordering_aliases is a viable solution. But it seems to work.
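A shim that supports both layouts could look like the following sketch, where `compiler` plays the role of `cls` in johnny/cache.py and the stub classes stand in for the two Django versions:

```python
def get_ordering_aliases(compiler):
    """Try the compiler attribute first (Django >= 1.6 layout), then
    fall back to compiler.query (older Django). Sketch only."""
    try:
        return compiler.ordering_aliases
    except AttributeError:
        return compiler.query.ordering_aliases

# minimal stand-ins for the two layouts
class NewCompiler:                 # Django >= 1.6: on the compiler
    ordering_aliases = ["new"]

class OldQuery:
    ordering_aliases = ["old"]

class OldCompiler:                 # older Django: on the query
    query = OldQuery()
```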
I installed johnny-cache 1.4 on Django 1.6 following the instructions, but it does not work:
# add johnny's middleware
MIDDLEWARE_CLASSES = (
'johnny.middleware.LocalStoreClearMiddleware',
'johnny.middleware.QueryCacheMiddleware',
# ...
)
# some johnny settings
CACHES = {
'default' : dict(
BACKEND = 'johnny.backends.memcached.MemcachedCache',
LOCATION = ['127.0.0.1:11211'],
JOHNNY_CACHE = True,
)
}
JOHNNY_MIDDLEWARE_KEY_PREFIX='jc_myproj'
the error that comes out is:
$ python manage.py runserver
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 75, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/home/diegoug/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/home/diegoug/local/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/runserver.py", line 6, in <module>
from django.contrib.staticfiles.handlers import StaticFilesHandler
File "/home/diegoug/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py", line 8, in <module>
from django.contrib.staticfiles.views import serve
File "/home/diegoug/local/lib/python2.7/site-packages/django/contrib/staticfiles/views.py", line 15, in <module>
from django.contrib.staticfiles import finders
File "/home/diegoug/local/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 12, in <module>
from django.contrib.staticfiles.storage import AppStaticStorage
File "/home/diegoug/local/lib/python2.7/site-packages/django/contrib/staticfiles/storage.py", line 8, in <module>
from django.core.cache import (get_cache, InvalidCacheBackendError,
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/cache/__init__.py", line 138, in <module>
cache = get_cache(DEFAULT_CACHE_ALIAS)
File "/home/diegoug/local/lib/python2.7/site-packages/django/core/cache/__init__.py", line 130, in get_cache
"Could not find backend '%s': %s" % (backend, e))
django.core.cache.backends.base.InvalidCacheBackendError: Could not find backend 'johnny.backends.memcached.MemcachedCache': 'module' object has no attribute 'CacheClass'
I investigated and it seems the problem is more on Django's side. I would like to help fix it, because this is a great tool and it would be a great help.
If I have from johnny.cache import enable; enable() in the project root __init__.py, calling the fab command fails because DJANGO_SETTINGS_MODULE is undefined.
$ fab
Traceback (most recent call last):
File "/my/project/path/venv/src/fabric/fabric/main.py", line 639, in main
docstring, callables, default = load_fabfile(fabfile)
File "/my/project/path/venv/src/fabric/fabric/main.py", line 163, in load_fabfile
imported = importer(os.path.splitext(fabfile)[0])
File "/my/project/path/fabfile.py", line 8, in <module>
from my_project.util.deploy import crontab as crontab_tools
File "/my/project/path/my_project/__init__.py", line 2, in <module>
from johnny.cache import enable
File "/my/project/path/venv/lib/python2.7/site-packages/johnny/cache.py", line 14, in <module>
from johnny import settings
File "/my/project/path/venv/lib/python2.7/site-packages/johnny/settings.py", line 5, in <module>
from django.core.cache import get_cache, cache
File "/my/project/path/venv/lib/python2.7/site-packages/django/core/cache/__init__.py", line 76, in <module>
if not settings.CACHES:
File "/my/project/path/venv/lib/python2.7/site-packages/django/utils/functional.py", line 184, in inner
self._setup()
File "/my/project/path/venv/lib/python2.7/site-packages/django/conf/__init__.py", line 40, in _setup
raise ImportError("Settings cannot be imported, because environment variable %s is undefined." % ENVIRONMENT_VARIABLE)
ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
I have johnny-cache 1.4 and Fabric 1.6.0.
Scenario: cx_Oracle + Johnny Cache + django.qs.iterator() + table with CLOB column -> produces error:
ProgrammingError: LOB variable no longer valid after subsequent fetch
(more details http://starship.python.net/crew/atuining/cx_Oracle/html/lobobj.html)
The issue is that cache.py in newfun() converts the iterator object to a list at line 376:
371 if hasattr(val, '__iter__'):
372 #Can't permanently cache lazy iterables without creating
373 #a cacheable data structure. Note that this makes them
374 #no longer lazy...
375 #todo - create a smart iterable wrapper
376 -> val = list(val)
377 if key is not None:
378 self.cache_backend.set(key, val, settings.MIDDLEWARE_SECONDS, db)
379 return val
and after the rows are fetched, for all but the last row the CLOB columns become unreadable, producing the mentioned error.
I therefore tried to avoid the issue by putting this table in the BLACKLIST, but the iterator -> list conversion is done anyway.
Suggestion: do not convert iterator -> list in cases where the key is not set, i.e. move the "if hasattr" check inside the "if key is not None" block.
I use a manage.py command to parse and upload some data to my database. johnny-cache cached my data once, and after updates the cache doesn't refresh. The database has the actual data, but my site shows old data from the cache.
I have to restart my memcached server after each data update to solve the problem. Has anyone had a similar problem?
Hi all,
for proper app deployment on production servers in our cloud service, it is mandatory to install from the PyPI package system instead of downloading directly over HTTPS (or other methods). Is there any schedule for an updated johnny-cache==1.4.1 release with all the goodies done since the last version? Or at least a jc==1.5.0betaX version if tests are not passing; it would help integration in projects.
Thanx
If DISABLE_QUERYSET_CACHE is set to True, QueryCacheMiddleware is a no-op. However, celery_enable_all (specifically, johnny.cache.utils.prerun_handler) calls patch indiscriminately.
As described in cache-machine's docs (http://cache-machine.readthedocs.org/en/latest/#count-queries), COUNT queries are not reliably invalidated and are not consistent.
@jmoiron it seems that you've published your SECRET_KEY in settings.py.
Here are the details:
Unfortunately we haven't had the time to investigate further, but we managed to find a case where johnny (or rather the runserver) goes crazy and hogs all 16 GB of RAM and all swap on our dev server. The server freezes with the message "task jbd2/dm-0-8:584 blocked for more than 120 seconds." and becomes unresponsive.
We spent the rest of that day thinking the disk was about to die and let it run fsck ... -cc overnight. When no errors were found, we started looking into the Django project specifics. Some parts of the project worked fine, but one page generated the crash described above.
I can reproduce it in my dev env pretty easily, but it's hard to provide more information since it's project specific. When I remove johnny-cache from the project, everything works fine.
While this might not be a johnny-cache problem but something else underneath, I still thought it would be worth mentioning here.
We're using johnny cache in a multi-web server environment and we're running into some strange issues in which query caches aren't being invalidated. Unfortunately, we haven't been able to figure out how to reproduce it though it does happen consistently.
We're set up at EC2 and we have 2 web servers set up using nginx + uwsgi. We're only using one db (pooled using pgbouncer on each webserver) and have johnny cache configured to use the memcached backend with ElastiCache.
Most of our code uses Django's default autocommit mode but we do have some methods where we use the @transaction.commit_on_success decorator.
We also have some cron jobs (set up using django-cronjobs) which we run and which need to invalidate caches. In the methods for those cron jobs, we call johnny.cache.enable() to set up johnny cache before we do any other work.
The scenario below works sometimes but then stops working after some time. Once it stops working, we can manually invalidate the object's cache and it starts working again or we can cycle uwsgi on the webservers which also gets things working again.
Step 2 is what stops working in the sense that new MyObjects don't show up in the list.
We've spent hours trying to track this down so any help you can provide would be great!
In cache.py:KeyHandler.get_multi_generation(...) a key is formed by mixing the current generations of all the tables involved. This means that if any of the tables are updated a new key will be derived.
What doesn't make sense to me is that this key is then used in a key -> generation lookup. Why not just use this derived value (currently called key) as the generation for that combination of tables? It would save a get and possibly a set operation.
In other words if we have table A with generation Ag1 and table B with generation Bg1 then it is sufficient for the generation of (A, B) to be calculated as (Ag1 ^ Bg1) and there is no need to store/lookup a random value using (Ag1 ^ Bg1) as the key.
Or am I missing something?
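The proposed simplification can be sketched directly. Integer generations are used here for illustration; johnny's actual generations are random strings, so a real implementation would hash the concatenation rather than XOR, but the point stands: derive the combined generation, don't look it up.

```python
def multi_generation(generations):
    """Combine per-table generations into one value directly, avoiding
    the extra get (and possible set) that a key -> generation lookup
    would cost. Any table update changes its generation and therefore
    the combined value."""
    combined = 0
    for gen in generations:
        combined ^= gen
    return combined
```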
This happens in other places inside TransactionManager, anywhere this code is used:
c = self.local.mget('%s_%s_*' % (self.prefix, self._trunc_using(using)))
The problem is that the 'using' variable is the DB alias, not the substituted JOHNNY_CACHE_KEY value. It is done correctly in KeyGen.gen_table_key:
db = unicode(settings.DB_CACHE_KEYS[db])
...
return '%s_%s_table_%s' % (self.prefix, db, table)
which puts them into self.local correctly during the set. But since we are then looking for keys to flush that are not there, the real ones never get flushed.
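A sketch of the fix, applying the same substitution before building the mget glob (the truncation step via _trunc_using is ignored here for brevity; names are illustrative):

```python
def flush_pattern(prefix, db_cache_keys, using):
    """Build the local-store glob with the substituted cache key,
    matching how gen_table_key writes entries, so the flush pattern
    actually matches the keys that were stored."""
    db = db_cache_keys.get(using, using)
    return '%s_%s_*' % (prefix, db)
```

For example, with DB_CACHE_KEYS mapping 'default' to 'primary', the pattern becomes 'jc_primary_*' instead of the never-matching 'jc_default_*'.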
When using johnny cache with django's TestCase, johnny cache's transaction patching gets overridden by TestCase's patching. This results in johnny cache's local cache not being cleared between tests, creating all kinds of problems.
I see the johnny.utils.celery_enable_all function, but it is not clear where it should be called from.
I see this same question posted on the (old?) bitbucket repo too in issue 47 but it was never answered.
Any clarification would be much appreciated.
I have the following in my proj/__init__.py:
from johnny.cache import enable
enable()
I find that if I spawn an async celery task in ./manage.py shell, the cache will be correct even without the celery patch:
from johnny.utils import celery_enable_all
celery_enable_all()
Looking at the code, why is unpatch needed? Is it necessary at all?
def prerun_handler(*args, **kwargs):
    """Celery pre-run handler. Enables johnny-cache."""
    patch()

def postrun_handler(*args, **kwargs):
    """Celery postrun handler. Unpatches and clears the localstore."""
    unpatch()
    local.clear()

def celery_enable_all():
    """Enable johnny-cache in all celery tasks, clearing the local-store
    after each task."""
    from celery.signals import task_prerun, task_postrun, task_failure
    task_prerun.connect(prerun_handler)
    task_postrun.connect(postrun_handler)
    # Also have to cleanup on failure.
    task_failure.connect(postrun_handler)
Hi There,
Using PyCharm with a pip source pull will cause issues with the test runner, because there is a settings file defined at the top level.
Please remove this file, or place it in a test application directory if you need it.
Ref: http://youtrack.jetbrains.com/issue/PY-11610
Thanks
AttributeError: 'module' object has no attribute 'empty_iter'
Solution:
import sys

try:
    if sys.argv[1] != 'syncdb':
        raise Exception
except:
    from johnny.cache import enable
    enable()
Hi there,
I found a bug which showed up as a MemoryError when using johnny-cache and celery. Specifically, I found that get_tables would recursively add duplicate table names to the tables list by (number of foreign keys × number of parameters × recursions). This can be visualized by adding the following print statement to https://github.com/jmoiron/johnny-cache/blob/johnny-cache-1.4.0/johnny/cache.py#L102
def get_tables(node, tables):
    print "Node: {} Len: {} Tables: {}".format(node, len(tables), set(tables))
What this will show is the following (for 2 queries; note: I trimmed the Node a bit for clarity):
Node: (AND: u'')) Len: 2 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'____-___')) Len: 4 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'Based On Plans')) Len: 8 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 16 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 32 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 64 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 128 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 256 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 512 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 1024 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'OR')) Len: 2048 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'RR-001 14.4 -1 ')) Len: 4096 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 8192 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 16384 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 32768 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: 472)) Len: 65536 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 131072 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 262144 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 524288 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 1048576 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'Steven ')) Len: 2097152 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 4194304 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 8388608 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'00000000')) Len: 16777216 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 33554432 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 67108864 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'')) Len: 134217728 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: u'Non Solar 14.4')) Len: 268435456 Tables: set([u'remrate_data_building', u'remrate_data_project'])
Node: (AND: 9365)) Len: 536870912 Tables: set([u'remrate_data_building', u'remrate_data_project'])
And for 3 Foreign Keys
Node: (AND: 0)) Len: 3 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 6 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 9 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 12 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 15 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 18 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 21 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 24 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 27 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Node: (AND: 0)) Len: 30 Tables: set([u'remrate_data_building', u'remrate_data_energystarrequirements', u'remrate_data_simulation'])
Full Error:
Traceback (most recent call last):
File "/apps/remrate_data/tasks.py", line 780, in pull_remrate_data_non_task
remrate_transfer_obj.transfer()
File "/apps/remrate_data/tasks.py", line 575, in transfer
self.get_model_data(SourceModelObj, DestinationModelObj, option_dict)
File "/apps/remrate_data/tasks.py", line 526, in get_model_data
object, create = target_model_obj.objects.get_or_create(**data)
File "/python2.7/site-packages/django/db/models/manager.py", line 146, in get_or_create
return self.get_query_set().get_or_create(**kwargs)
File "/python2.7/site-packages/django/db/models/query.py", line 484, in get_or_create
return self.get(**lookup), False
File "/python2.7/site-packages/django/db/models/query.py", line 398, in get
num = len(clone)
File "/python2.7/site-packages/django/db/models/query.py", line 106, in __len__
self._result_cache = list(self.iterator())
File "/python2.7/site-packages/django/db/models/query.py", line 317, in iterator
for row in compiler.results_iter():
File "/python2.7/site-packages/django/db/models/sql/compiler.py", line 775, in results_iter
for rows in self.execute_sql(MULTI):
File "/python2.7/site-packages/johnny/cache.py", line 324, in newfun
tables = get_tables_for_query(cls.query)
File "/python2.7/site-packages/johnny/cache.py", line 126, in get_tables_for_query
tables += get_tables(node, tables)
MemoryError
The Fix:
The fix is simple, and incidentally you already do it a bit further down:
https://github.com/jmoiron/johnny-cache/blob/johnny-cache-1.4.0/johnny/cache.py#L111
return list(set(tables))
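A minimal stand-in for the fixed traversal (the SimpleNamespace nodes simulate Django's WhereNode tree; this is a sketch of the reporter's fix, not johnny's exact code): deduplicating at every level means the recursion cannot multiply entries.

```python
import types

def get_tables(node, tables):
    """Walk a WhereNode-like tree collecting table names, returning a
    deduplicated list so recursive calls can't blow up the list size."""
    for child in getattr(node, "children", []):
        if isinstance(child, tuple):      # leaf constraint: (table, ...)
            tables.append(child[0])
        else:
            get_tables(child, tables)
    return list(set(tables))

inner = types.SimpleNamespace(children=[("remrate_data_building",),
                                        ("remrate_data_project",)])
tree = types.SimpleNamespace(children=[("remrate_data_building",), inner])
tables = get_tables(tree, [])
```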
I'm putting together a pull request for this. Thanks so much!!
Thanks for this nice app.
I would like to cache functions as well with johnny-cache, ideally with a possibility to manually invalidate the cache, e.g. on the post_save model signal.
Is that possible / planned?
I'm using 1.4. I found that johnny doesn't cache queryset.exists() if it returns False, so I dug into the code. It looks like johnny treats a cached None value and a cache miss as the same thing.
def _monkey_select(self, original):
    ...
    if val is not None:
        signals.qc_hit.send(sender=cls, tables=tables,
                            query=(sql, params, cls.query.ordering_aliases),
                            size=len(val), key=key)
        return val
    signals.qc_miss.send(sender=cls, tables=tables,
                         query=(sql, params, cls.query.ordering_aliases),
                         key=key)
    val = original(cls, *args, **kwargs)
And query.exists()'s call to execute_sql may return None, which will then be cached but recognized as a cache miss when fetched.
Hi,
Do I need to restart memcached after I perform a database migration using South?
What happens to stale cache entries that may not reflect the latest changes in the model?
First off, I'm not sure if this is an issue with johnny-cache or gunicorn (or neither), and I've been able to work around it without much difficulty, but nonetheless I thought I'd report it here in case anyone else has the same trouble.
I followed the instructions in the JC docs, and because my app makes extensive use of management commands and celery tasks, I needed to ensure that JC was enabled throughout, so I added the enable lines to my project's __init__.py as directed:
from johnny.cache import enable
enable()
This however caused an error every time my gunicorn workers booted, with the stack trace seeming to indicate JC (or something) was being initialised "too early" in the process:
2013-03-09 06:11:51 [21016] [DEBUG] Exception in worker process:
Traceback (most recent call last):
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in spawn_worker
worker.init_process()
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 99, in init_process
self.wsgi = self.app.wsgi()
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 101, in wsgi
self.callable = self.load()
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/gunicorn/app/djangoapp.py", line 91, in load
return mod.make_wsgi_application()
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/gunicorn/app/django_wsgi.py", line 34, in make_wsgi_application
if get_validation_errors(s):
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/django/core/management/validation.py", line 28, in get_validation_errors
from django.db import models, connection
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/django/db/__init__.py", line 14, in <module>
if not settings.DATABASES:
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/django/utils/functional.py", line 276, in __getattr__
self._setup()
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/django/conf/__init__.py", line 42, in _setup
self._wrapped = Settings(settings_module)
File "/home/biggleszx/.virtualenvs/myappname/local/lib/python2.7/site-packages/django/conf/__init__.py", line 89, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'myappname.config.live.settings' (Is it on sys.path?): cannot import name connections
Needless to say I checked and nothing in my app referred to this "connections" import, but I don't know if that's something specific to JC or gunicorn, or neither.
I tried moving the enable lines from __init__.py into my main project settings.py file and the same thing occurred.
Anyway, I was able to solve the issue by removing the enable lines and placing them within my management commands and celery task files, where everything seems to work just fine.
FYI, the relevant packages in my virtualenv look like this:
Django==1.3.1
gunicorn==0.14.5
johnny-cache==1.4
python-memcached==1.48
I'm using memcached version 1.4.14-0ubuntu1 on Ubuntu 12.10 64-bit. I note that the gunicorn version I'm using is now a little out of date (so's Django, for that matter), so I wonder if that might have something to do with it.
Anyway, hope this helps someone in a similar situation.
I'm seeing some erratic behavior regarding decimal.Decimal objects.
With something like this:
tax_percentage = get_default_tax_percentage_for_date(datetime.date.today())
# tax_percentage is decimal.Decimal("0.24") at this point
return TaxClass.objects.get(tax_percentage = tax_percentage)
the database code (django/db/models/query.py:get()) gets {'tax_percentage': '"Decimal(\'0.24\')"'} as kwargs.
Sorry for having no proper test case right now...
The docs explain that this is a tricky part:
http://pythonhosted.org/johnny-cache/queryset_cache.html#transactions
But I am struggling to understand whether autocommit causes any issues for Johnny.
Or does it make Johnny's life easier?
Hi,
Using Django 1.4, and in the admin I have a model whose detail view I can't see.
MemcachedError at /admin/nims/article/1141/
error 37 from memcached_set: SYSTEM ERROR(Resource temporarily unavailable), host: 127.0.0.1:11211 -> libmemcached/io.cc:358
Request Method: GET
Request URL: http://127.0.0.1:8007/admin/nims/article/1141/
Django Version: 1.4
Exception Type: MemcachedError
Exception Value:
error 37 from memcached_set: SYSTEM ERROR(Resource temporarily unavailable), host: 127.0.0.1:11211 -> libmemcached/io.cc:358
Exception Location: /home/django/.virtualenvs/astrobiology-alpha/local/lib/python2.7/site-packages/django/core/cache/backends/memcached.py in set, line 64
Python Executable: /home/django/.virtualenvs/astrobiology-alpha/bin/python
Python Version: 2.7.1
Most of my admin works; my guess is this article model is quite a bit bigger than the others.
Please let me know how to debug this and if I can send you anything else that would be useful.
Thanks,
Shige
query.tables includes aliases, such as "T6"; these alias names are useless in terms of invalidation and also cause a lot of generational noise. They are necessarily the same base tables (aliases are only generated if required because a query refers to the same table more than once).
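A sketch of filtering the generated aliases before invalidation. The `T` + digits pattern matches Django's generated alias names; a real table that happened to be named like `T6` would be wrongly dropped, so this is illustrative only:

```python
import re

_ALIAS = re.compile(r'^T\d+$')   # Django-generated aliases: T2, T3, ...

def base_tables(tables):
    """Drop generated aliases before invalidation; each alias refers to
    a base table already present in the list, so aliases only add
    generational noise."""
    return sorted(set(t for t in tables if not _ALIAS.match(t)))
```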
Is it possible to return the cache keys for a given queryset? I want to reuse johnny-cache's keys to cache the entire response in memcached.
Thanks
Hi, is the project still alive?
OK this is not really an issue but a question.
Doing maintenance on a django 1.3.1 app. We took over from a different company.
In the settings.py file, I just discovered the following code:
# Models based on PostgreSQL views don't get their caches invalidated when
# their source tables are updated.
# FIXME: Disable caching of PostgreSQL views until we can figure out how to invalidate the view when its source tables are invalidated.
MAN_IN_BLACKLIST=['overview']
The app had johnny-cache 0.3.3 configured. I updated to the latest johnny-cache (1.4).
Does the above still apply? Can I remove the overview view from the blacklist? Thing is, loading that view is really slow, so I think I know why now...
As soon as I commented out the blacklisted view, the application was flying... I really would love to leave it like this, but I need to understand whether there are still any implications with invalidating the cache on views.
In trunk (commit 0f16eb1) there is a settings.py file. I imagine it's there due to an accidental commit.
If it is intentional, note that it can't import "proj.urls", which is unlikely to exist for anyone else.
I know there is / was still a problem with django>=1.6 and the atomic transactions Django now uses.
Django uses @transaction.atomic around all of the admin views (https://github.com/django/django/blob/stable/1.6.x/django/contrib/admin/options.py#L1099). This causes a problem, because when johnny tries to invalidate a table, it checks whether the current transaction is managed, and handles invalidation differently depending on whether it is.
I was having the problem where I would add something in the admin panel, and I wasn't seeing the new object in the change view.
Sorry if this is a little sparse, writing this quickly so I don't lose my train of thought.
The latest release has a critical problem that stops it working with Django 1.6: 'Query' object has no attribute 'ordering_aliases', which was fixed recently. I don't think we really want to have lines like git+git://github.com/jmoiron/johnny-cache@7c8d3ab7aa2f2100c97b4c5bc5ef9120d2e72e9c#egg=johnny-cache in requirements.txt and similar.
I get the following error when trying to deserialize Django data, and only with the cache enabled.
(Johnny cache 1.4, Django 1.4.18)
Traceback (most recent call last):
File "/opt/tangram/tangramdt/tangram_base/src/tangram/apps/api/deploy.py", line 285, in import_process_def
transaction.commit()
File "/opt/tangram/tangramdt/buildout/eggs/johnny_cache-1.4-py2.7.egg/johnny/transaction.py", line 145, in newfun
self._flush(commit=commit, using=using)
File "/opt/tangram/tangramdt/buildout/eggs/johnny_cache-1.4-py2.7.egg/johnny/transaction.py", line 133, in _flush
self.cache_backend.set(key, value, self.timeout)
File "/opt/tangram/tangramdt/buildout/eggs/Django-1.4.18-py2.7.egg/django/core/cache/backends/memcached.py", line 64, in set
self._cache.set(key, value, self._get_memcache_timeout(timeout))
File "/opt/tangram/tangramdt/buildout/eggs/python_memcached-1.48-py2.7.egg/memcache.py", line 565, in set
return self._set("set", key, val, time, min_compress_len)
File "/opt/tangram/tangramdt/buildout/eggs/python_memcached-1.48-py2.7.egg/memcache.py", line 802, in _set
return _unsafe_set()
File "/opt/tangram/tangramdt/buildout/eggs/python_memcached-1.48-py2.7.egg/memcache.py", line 780, in _unsafe_set
store_info = self._val_to_store_info(val, min_compress_len)
File "/opt/tangram/tangramdt/buildout/eggs/python_memcached-1.48-py2.7.egg/memcache.py", line 751, in _val_to_store_info
pickler.dump(val)
DatabaseError: ORA-22922: nonexistent LOB value
As a workaround I surround it with a cache.disable() / cache.enable()
Hi,
I'm relatively new to Johnny Cache and memcached, so maybe it's a simple problem, but I have a report app that is giving me trouble. For 2011, for example, I have 14 teams on the landing page. Clicking on most of them works fine, but one always gives this error:
MemcachedError at /nai/reports/annual-reports/2011/gsfc/
error 37 from memcached_set: SYSTEM ERROR(Resource temporarily unavailable), host: 127.0.0.1:11211 -> libmemcached/io.cc:358
I'm wondering if it has something to do with the version of memcached on my server.
I'm running Ubuntu. Can you tell me the best version of memcached to use?
Also, I'm using pylibmc==1.2.3.
Thanks.