python-attrs / attrs
Python Classes Without Boilerplate
Home Page: https://www.attrs.org/
License: MIT License
Please explain what default=NOTHING means, specifically that it will require the caller to specify a value(?)
The docs currently read: "If the value an instance of Factory, it callable will be use to construct a new value."
"it callable" feels like a typo. Should probably be "its callable".
$ tox -e py26
GLOB sdist-make: /Users/glyph/Projects/attrs/setup.py
py26 inst-nodeps: /Users/glyph/Projects/attrs/.tox/dist/attrs-15.1.0.dev0.zip
py26 installed: -f file:///Users/glyph/.wheelhouses/CPython-2.7.6,argparse==1.3.0,attrs==15.1.0.dev0,cov-core==1.15.0,coverage==3.7.1,py==1.4.30,pytest==2.7.2,pytest-cov==1.8.1,wheel==0.24.0
py26 runtests: PYTHONHASHSEED='3687876705'
py26 runtests: commands[0] | python setup.py test -a --cov attr --cov-report term-missing
running test
Searching for zope.interface
Best match: zope.interface 4.1.2
Processing zope.interface-4.1.2-py2.6-macosx-10.10-intel.egg
Using /Users/glyph/Projects/attrs/.eggs/zope.interface-4.1.2-py2.6-macosx-10.10-intel.egg
running egg_info
writing dependency_links to attrs.egg-info/dependency_links.txt
writing top-level names to attrs.egg-info/top_level.txt
writing attrs.egg-info/PKG-INFO
reading manifest file 'attrs.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.txt'
no previously-included directories found matching 'docs/_build'
writing manifest file 'attrs.egg-info/SOURCES.txt'
running build_ext
============================= test session starts ==============================
platform darwin -- Python 2.6.9 -- py-1.4.30 -- pytest-2.7.2
rootdir: /Users/glyph/Projects/attrs, inifile:
plugins: cov
collected 110 items
tests/test_config.py ....
tests/test_dark_magic.py ......
tests/test_dunders.py .................F.........
tests/test_filters.py .................
tests/test_funcs.py .............
tests/test_make.py ....................s...............
tests/test_validators.py .......
=================================== FAILURES ===================================
____________________________ TestAddInit.test_init _____________________________
self = <tests.test_dunders.TestAddInit object at 0x10ebd1750>
def test_init(self):
"""
If `init` is False, ignore that attribute.
"""
C = make_class("C", {"a": attr(init=False), "b": attr()})
with pytest.raises(TypeError) as e:
C(a=1, b=2)
msg = e.value if PY26 else e.value.args[0]
> assert "__init__() got an unexpected keyword argument 'a'" == msg
E assert "__init__() got an unexpected keyword argument 'a'" == TypeError("__init__() got an unexpected keyword argument 'a'",)
tests/test_dunders.py:210: AssertionError
--------------- coverage: platform darwin, python 2.6.9-final-0 ----------------
Name Stmts Miss Branch BrMiss Cover Missing
-------------------------------------------------------------
attr/__init__ 17 0 0 0 100%
attr/_config 9 0 2 0 100%
attr/_funcs 35 0 18 0 100%
attr/_make 200 0 88 0 100%
attr/filters 15 0 3 0 100%
attr/validators 22 0 8 0 100%
-------------------------------------------------------------
TOTAL 298 0 119 0 100%
=============== 1 failed, 108 passed, 1 skipped in 0.29 seconds ================
ERROR: InvocationError: '/Users/glyph/Projects/attrs/.tox/py26/bin/python setup.py test -a --cov attr --cov-report term-missing'
___________________________________ summary ____________________________________
ERROR: py26: commands failed
$
Hello,
For the usage of attr with DB API 2.0 libraries (like sqlite3), it would be great to have an astuple method.
This method would return the object's values in a tuple, ordered by the order of declaration of its attributes.
For example, we could do things like this:
import sqlite3
import attr

@attr.s()
class Foo:
    a = attr.ib()
    b = attr.ib()

foo = Foo(2, 3)
con = sqlite3.connect(":memory:")
with con:
    con.execute("CREATE TABLE foo (a, b)")  # the table must exist for the INSERT below
    con.execute("INSERT INTO foo VALUES (?, ?)", attr.astuple(foo))
This would be useful for DB API libraries but also for other needs probably.
What is your opinion about it ?
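The proposed behavior can be sketched without attrs at all; the astuple name and the explicit field-order list below are part of the proposal, not an existing API:

```python
# Sketch of the proposed astuple: emit an object's values as a tuple,
# ordered by the declared attribute order (hypothetical helper).
def astuple(obj, field_order):
    return tuple(getattr(obj, name) for name in field_order)

class Foo:
    def __init__(self, a, b):
        self.a = a
        self.b = b

foo = Foo(2, 3)
assert astuple(foo, ["a", "b"]) == (2, 3)
```

In attrs itself the field order would come from the class's attribute metadata rather than a hand-written list.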
The convention of putting <> around a repr is to make it syntactically invalid so it won't work with eval
. But attr
's reprs will in fact work with eval, so IMHO you should just leave them out.
When calling attr.asdict(), it converts attributes of type tuple and set to type list, and subclasses of dict to dict. This is done for JSON compatibility; however, there are use cases where it's required to preserve the original type.
Therefore I request an optional argument called retain_collection_types, which is set to False by default (to not break the existing API). If enabled, the function should not convert any collection type.
from attr import attr, attributes

@attributes(these={
    'a': attr()
})
class Test(object):
    def __init__(self, *args, **kwargs):
        self._a = None
        super(Test, self).__init__(*args, **kwargs)

    def set_a(self, val):
        if val is not None:
            val.add(self)
        self._a = val

    def get_a(self):
        return self._a

    a = property(get_a, set_a)
>>> s=set()
>>> t=Test(s)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<attrs generated init 598ab62ad1f7da66a4de0a0e4c8e06f757331088>", line 3, in __init__
File "test_attr.py", line 14, in set_a
val.add(self)
File "c:\python27\lib\site-packages\attr\_make.py", line 215, in hash_
return hash(_attrs_to_tuple(self, attrs))
File "c:\python27\lib\site-packages\attr\_make.py", line 201, in _attrs_to_tuple
return tuple(getattr(obj, a.name) for a in attrs)
File "c:\python27\lib\site-packages\attr\_make.py", line 201, in <genexpr>
return tuple(getattr(obj, a.name) for a in attrs)
File "test_attr.py", line 18, in get_a
return self._a
AttributeError: 'Test' object has no attribute '_a'
failing example
import attr
def test_example():
@attr.s
class Base(object):
attr = attr.ib(default=False)
@attr.s
class Sub(Base):
needed = attr.ib()
Example:
def require_foo(attribute, value):
    if not value:
        raise ValueError("foo is required!")

@attr.s
class Bar(object):
    foo = attr.ib(validator=require_foo)
    baz = attr.ib()

b = Bar(foo="foo", baz="baz")
Expected behavior:
>>> attr.validate(b)
>>>
Actual behavior:
>>> attr.validate(b)
/Users/lynn/.virtualenvs/temp-raml/lib/python2.7/site-packages/attr/_funcs.pyc in validate(inst)
125 """
126 for a in fields(inst.__class__):
--> 127 a.validator(a, getattr(inst, a.name))
TypeError: 'NoneType' object is not callable
>>>
...while keeping the attr.ib validators.
I'd like to be able to create a class:
>>> @attr.s
... class Obj:
...     a = attr.ib(validator=attr.validators.instance_of(str))
Instantiate it:
>>> obj = Obj('1')
And then update instantiated attributes but have these updates subject to existing validators:
>>> obj.a = 1
TypeError: 'a' must be <class 'str'>
Maybe this could be achieved by means of a helper function? Something like:
>>> attr.update(obj, a=1)
Is this kind of behavior something that people are interested in?
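One possible shape for this is a __setattr__ that re-runs per-attribute validators before storing the value. A plain-Python sketch, not attrs' API (the _validators table and instance_of helper here are illustrative):

```python
# Plain-Python sketch of validate-on-assignment: __setattr__ runs a
# per-attribute validator before storing the value (hypothetical design).
def instance_of(tp):
    def check(name, value):
        if not isinstance(value, tp):
            raise TypeError("'%s' must be %r" % (name, tp))
    return check

class Obj:
    _validators = {"a": instance_of(str)}

    def __init__(self, a):
        self.a = a  # goes through __setattr__, so it is validated too

    def __setattr__(self, name, value):
        validator = self._validators.get(name)
        if validator is not None:
            validator(name, value)
        object.__setattr__(self, name, value)

obj = Obj("1")
try:
    obj.a = 1
except TypeError:
    pass
else:
    raise AssertionError("expected TypeError")
```

An attr.update(obj, a=1) helper as proposed above could simply loop over the keyword arguments, run each attribute's validator, and then setattr.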
Code that I would expect to run without errors:
import attr

@attr.s(frozen=True)
class Ice:
    x = attr.ib(convert=lambda y: y + 1)

assert Ice(1).x == 2
Actual runtime result:
Traceback (most recent call last):
File "asdf.py", line 7, in <module>
assert Ice(1).x == 2
File "<attrs generated init cd74cad05a4efad0b01528e489d51126583d23cb>", line 4, in __init__
File "/usr/lib/python3.5/site-packages/attr/_make.py", line 483, in _convert
setattr(inst, a.name, a.convert(getattr(inst, a.name)))
File "/usr/lib/python3.5/site-packages/attr/_make.py", line 156, in _frozen_setattrs
raise FrozenInstanceError()
attr.exceptions.FrozenInstanceError
Clearly _convert isn't aware of the class's frozenness. I took a quick peek at the code and it seems like it should perhaps be using _cached_setattr instead of the ordinary setattr?
I KNOW YOU DON'T LIKE INHERITANCE, but...
Current behavior:
In [2]: @attr.s
...: class Foo(object):
...: x = attr.ib()
...: y = attr.ib(default='yyy')
...:
In [3]: @attr.s
...: class Bar(Foo):
...: z = attr.ib()
...:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-35fd7c629865> in <module>()
1 @attr.s
----> 2 class Bar(Foo):
3 z = attr.ib()
4
/Users/lynn/.virtualenvs/temp-raml/lib/python2.7/site-packages/attr/_make.pyc in attributes(maybe_cl, these, repr_ns, repr, cmp, hash, init)
191 return wrap
192 else:
--> 193 return wrap(maybe_cl)
194
195
/Users/lynn/.virtualenvs/temp-raml/lib/python2.7/site-packages/attr/_make.pyc in wrap(cl)
175 if getattr(cl, "__class__", None) is None:
176 raise TypeError("attrs only works with new-style classes.")
--> 177 _transform_attrs(cl, these)
178 if repr is True:
179 cl = _add_repr(cl, ns=repr_ns)
/Users/lynn/.virtualenvs/temp-raml/lib/python2.7/site-packages/attr/_make.pyc in _transform_attrs(cl, these)
128 "No mandatory attributes allowed after an attribute with a "
129 "default value or factory. Attribute in question: {a!r}"
--> 130 .format(a=a)
131 )
132 elif had_default is False and a.default is not NOTHING:
ValueError: No mandatory attributes allowed after an attribute with a default value or factory. Attribute in question: Attribute(name='z', default=NOTHING, validator=None, repr=True, cmp=True, hash=True, init=True)
Expected behavior - no error returned - to be able to put mandatory attributes after attr.ib()'s with default or factories set, at least for inheritance.
I have a feeling you did this on purpose - or else why would you have such an error. So perhaps you can explain to me the benefit of having attr.ibs with defaults/factories be the last of the attr.ibs to be defined?
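For what it's worth, the restriction mirrors Python's own rule for function signatures: the generated __init__ lays the attributes out as positional parameters in declaration order, and Python forbids a parameter without a default after one that has a default. A minimal demonstration:

```python
# Python itself rejects a non-default parameter after a default one,
# which is exactly the signature attrs would have to generate for Bar above.
error = ""
try:
    compile("def __init__(self, y='yyy', z): pass", "<demo>", "exec")
except SyntaxError as exc:
    error = str(exc)

assert "default" in error
```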
I'd personally be able to get rid of a lot of boilerplate (either at call sites or as __init__ methods) from my classes if attrs allowed me to supply a conversion callable with my attr.ibs.
e.g.
from pyrsistent import pmap
@attr.s
class Foo(object):
mapping = attr.ib(coerce=pmap)
The coerce argument (naming to be bikeshedded) would be called with the value passed in after validation, and the return value would be used as the actual attribute value.
Would such a feature be accepted as a patch?
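Absent the feature, the equivalent hand-written __init__ looks like this; tuple stands in for pyrsistent.pmap so the sketch has no third-party dependency:

```python
# What the proposed coerce= would generate, written by hand.
# tuple() is a stand-in for pyrsistent.pmap in the example above.
class Foo:
    def __init__(self, mapping):
        self.mapping = tuple(mapping)  # coercion applied to the passed value

f = Foo([("a", 1)])
assert f.mapping == (("a", 1),)
```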
A non-attrs example:
class Foo(object):
    def __init__(self, x):
        self.x = x
        self.y = 'bla'

class Bar(Foo):
    def __init__(self, x):
        super(Bar, self).__init__(x)
        self.z = self.foo()  # this is what I want to do with attrs

    def foo(self):
        return self.y
>>> b = Bar(1)
>>> b.foo()
'bla'
>>> b.z
'bla'
Or ideally with attrs:
@attr.s
class Foo(object):
    x = attr.ib()
    y = attr.ib(default='bla')

@attr.s
class Bar(Foo):
    z = attr.ib(default=attr.Factory(foo, pass_self=True))

    def foo(self):
        return self.y
or something like that.
DANKE.
Given that attrs doesn't allow composing with a custom __init__, there is no way to initialize mutable implementation details.
For example,
class Pool(object):
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self._pool = []
        ...
can't be expressed with attrs.
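One way this could work without giving up the generated __init__ is a post-init hook. Here is a plain-Python sketch of that pattern; the with_post_init decorator and the __post_init__ method name are hypothetical, not attrs' API:

```python
# Sketch of a post-init hook: wrap the (generated) __init__ so an optional
# __post_init__ method can set up mutable internals (hypothetical names).
def with_post_init(cls):
    original = cls.__init__

    def __init__(self, *args, **kwargs):
        original(self, *args, **kwargs)
        hook = getattr(self, "__post_init__", None)
        if hook is not None:
            hook()

    cls.__init__ = __init__
    return cls

@with_post_init
class Pool:
    def __init__(self, pool_size):
        self.pool_size = pool_size

    def __post_init__(self):
        self._pool = []  # mutable implementation detail

p = Pool(4)
assert p.pool_size == 4 and p._pool == []
```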
This is actually not unexpected, but I hoped it would work:
>>> import attr
>>> @attr.s
... class A:
... x = attr.ib()
... _y = attr.ib()
...
>>> a = A(1, 2)
>>> attr.asdict(a)
{'_y': 2, 'x': 1}
>>> A(**_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() got an unexpected keyword argument '_y'
Maybe a function attr.fromdict(cls, dct) would solve this? It would remove the leading _ from all dict keys and then create the instance.
Since you explicitly talk about serialization to JSON in the docs, there should IMHO also be an easy way for deserialization.
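The proposed helper is small enough to sketch without attrs; the fromdict name is the proposal's, and the plain class below stands in for an attrs class whose __init__ strips the leading underscore from private attribute names:

```python
# Sketch of the proposed attr.fromdict: strip the leading underscore from
# keys so asdict() output round-trips through __init__ (hypothetical helper).
def fromdict(cls, dct):
    return cls(**{key.lstrip("_"): value for key, value in dct.items()})

class A:
    def __init__(self, x, y):
        self.x = x
        self._y = y

a = fromdict(A, {"_y": 2, "x": 1})
assert a.x == 1 and a._y == 2
```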
it seems to me attr.assoc() is broken when using slots=True:
import attr

@attr.s(slots=True)
class Foo:
    bar = attr.ib()
    baz = attr.ib()

a = Foo(1, 2)
b = attr.assoc(a, baz=3)
print(b)
results in
Traceback (most recent call last):
...
ValueError: baz is not an attrs attribute on <class '__main__.Foo'>.
when i also add frozen=True, like this:
@attr.s(slots=True, frozen=True)
class Foo:
    ...
i get this:
Traceback (most recent call last):
...
attr.exceptions.FrozenInstanceError
I was going to write some composition validators, like all (all the given validators have to match), but the more I thought about it the more I basically want all the compositional elements in http://testtools.readthedocs.org/en/latest/for-test-authors.html#matchers.
I was wondering how you felt about adding a validator that took said matchers? It would make it really easy for users to do custom validation, and testtools.matchers has pretty nice error reporting.
"a" doesn't really make sense to me as a word :)
attr.filters.include/attr.filters.exclude don't work with attributes as expected from the docs on classes where slots=True.
import attr

@attr.s(slots=True)
class Foo(object):
    bar = attr.ib()
    baz = attr.ib()

f = Foo(bar=1, baz=2)
assert attr.asdict(f, filter=attr.filters.exclude(Foo.bar)) == {'baz': 2}
the repr() generated by attrs looks something like this by default:
MyClass(a=1, b=2)
this is fine in simple cases. however, when using init=False and implementing my own __init__(), this repr() is misleading since it looks like code that can be evaluated, while actually it is not (and if it is, it's not an intended usage pattern anyway). what about having something like this instead for those cases:
<MyClass a=1 b=2>
...which is still very useful (e.g. for debugging), and it resembles a bit what python produces for classes that do not implement a custom __repr__():
>>> x = object()
>>> x
<object object at 0x7f6cda0c45e0>
obviously, this custom repr() behaviour would require explicit enabling. a flag to attr.s(), e.g. attr.s(use_eval_repr=False) (please come up with a better name), would make sense to me.
what do you think?
useful e.g. with structlog
x = attr.ib(default=Factory(logger.new, foo="bar"))
Objects, as in object-oriented programming, are side-effecty, mutable state, whose methods ought to represent I/O.
Values, as in functional programming, are immutable data, whose methods ought to represent computation.
Python does not have as good a division between these very different sorts of beasts as it should, but it does have one critical distinction: the __hash__ method.
characteristic has this issue which crops up occasionally where you end up with objects that you can't put into a dictionary as a key, because one of its attributes is a dictionary. Sometimes people expect to be able to do this because characteristic makes so many other things easy, and just expect dictionaries to suddenly be immutable; sometimes people expect hash-by-identity.
I propose that attr provide a way to specifically create two types of objects with a distinct interface; one that creates a "value" and one that creates an "object", so that users can see issues around mutability far earlier in the process.
So, for example, the "object" type would:
- omit __hash__ by default, provide an __eq__ that does structural equality, and __gt__/__lt__ that just raise exceptions
- optionally provide __hash__, but then also switch to an identity-based __eq__
and the "value" type would:
- call hash() on all of its arguments at construction time so it would fail immediately if it contained a mutable type

The Slots section mentions "instance variables". This is not a Python term. Consider using "data attribute" (although that may confuse with attrs' Attribute?)
The double-hyphen (--
) in the beginning of the document doesn't turn into a dash in ReST. Replace it with a literal em-dash: —.
The list in Validators has odd capitalization/punctuation:
#. The *instance* that's being validated. #. The *attribute* that it's validating #. and finally the *value* that is passed for it.
I think this is better punctuation:
#. the *instance* that's being validated,
#. the *attribute* that it's validating, and finally
#. the *value* that is passed for it.
In the example about Defaults, the comment here confused me for a moment:
def connect(cls, db_string): # connect somehow to db_string return cls(socket=42)
I read it as describing the line below it, rather than being a placeholder. Maybe this would be more intuitive:
# ... connect somehow to db_string ...
The Examples page is great, but I'm missing direct links to the API docs, in case I want to learn more about specific features. I found myself ctrl-F-ing the API documentation manually. Maybe such links would clutter the page — not sure I have a good solution.
I believe that this can avoid having to import attr at other places. Naturally, people could just simply overwrite this themselves, but it would be nice as an attribute of an attr class.
currently it's not possible to declare an attribute that's something like a list of Elements or a mapping of string to integer
Not sure that's really fixeable, but it took me a while to detect this bug in my code:
from attr import attributes, attr

@attributes
class Foo(object):
    a = attr()

    @property
    def a(self):
        return "yolo"

Foo("value")
raises
Traceback (most recent call last):
File "failure.py", line 13, in <module>
Foo("value")
TypeError: __init__() takes exactly 1 argument (2 given)
Should contain stuff like https://twitter.com/cournape/status/769120659783950336:
sigh. attr really helps following a golden rule: if your dict has a fixed and known set of keys, it is an object, not a hash
https://news.ycombinator.com/item?id=12363487
Dictionaries are not for fixed fields.
if you have a dict, it maps something to something else. You should be able to add and remove values.
Objects, on the other hand, are supposed to have specific fields of specific types, because their methods have strong expectations of what those fields and types are.
attrs lets you be specific about those expectations; a dictionary does not. It gives you a named entity (the class) in your code, which lets you explain in other places whether you take a parameter of that class or return a value of that class.
See "Edit on Github" in TR corner, and Github "View" link in version menu in BL corner.
Using Apache Spark, with attr 16.1.0, I get this error.
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File ".../spark/python/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File ".../spark/python/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File ".../spark/python/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File ".../lib/python2.7/site-packages/attr/_make.py", line 617, in __setattr__
raise FrozenInstanceError()
FrozenInstanceError
Unfortunately, I can't reproduce it with simple code, for some reason. Using IPython, I managed to track it down to rebuilding the Attributes class:
/usr/lib64/python2.7/pickle.py(1247)load_build()
1245 if slotstate:
1246 for k, v in slotstate.items():
-> 1247 setattr(inst, k, v)
Spark uses a custom pickler in order to support local function and classes, but still uses the standard unpickler.
This was not an issue with 16.0.0, so sticking to that for now.
Hi,
I'm working on making Attribute instances immutable, and also thinking about immutability in Python in general, to evaluate how hard it would be to actually allow attrs to create immutable classes.
(Side note: adding __slots__ to the Attribute class is probably a significant win for code bases with a lot of attrs classes, and can be trivially done by renaming the existing field _attributes to __slots__. This will be in the pull request too.)
Googling and playing around: the only way of making truly immutable classes in Python, I think, is by subclassing namedtuple. I'm not a huge fan of this approach though. My biggest philosophical objection is the fact that if this is done, isinstance(obj, tuple) and isinstance(obj, typing.Sequence) will be true. Instances of namedtuple subclasses are basically sequences/tuples. To me this is iffy.
Another approach is by disabling __setattr__, preferably combined with __slots__ so the __dict__ can't be modified directly. This isn't bulletproof, and __init__ basically depends on it not being bulletproof to set the initial values. The instances can still be changed by doing object.__setattr__(obj, 'name', val). If this is good enough, this approach is relatively straightforward.
What are your thoughts on this subject?
Here's my playground: https://github.com/Tinche/attrs/tree/feature/immutable-attributes
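For concreteness, the __setattr__-plus-__slots__ approach from the second paragraph looks like this (a minimal sketch, not the playground branch's actual code):

```python
# The non-bulletproof approach described above: __slots__ removes __dict__,
# __setattr__ raises, and __init__ cheats with object.__setattr__.
class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        object.__setattr__(self, "x", x)
        object.__setattr__(self, "y", y)

    def __setattr__(self, name, value):
        raise AttributeError("Point is immutable")

p = Point(1, 2)
try:
    p.x = 3
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
assert (p.x, p.y) == (1, 2)
```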
The link on the Github repo description needs updating for the RTD migration, as per their blog post of the 27th April ‘Securing subdomains’:
Starting today, Read the Docs will start hosting projects from subdomains on the domain readthedocs.io, instead of on readthedocs.org. This change addresses some security concerns around site cookies while hosting user generated data on the same domain as our dashboard.
https://attrs.readthedocs.io/en/stable/ is where one now ends up
I was quite excited to see characteristic improved to use descriptors, but I was surprised by following behaviour:
from attr import attributes, attr

@attributes
class A(object):
    a = attr()

@attributes
class B(A):
    b = attr()

@attributes
class C(B):
    c = attr()
raises a syntax error:
Traceback (most recent call last):
File "yo.py", line 15, in <module>
class C(B):
File "/home/davidc/.envs/jaguar-dev/local/lib/python2.7/site-packages/attr/_make.py", line 194, in attributes
return wrap(maybe_cl)
File "/home/davidc/.envs/jaguar-dev/local/lib/python2.7/site-packages/attr/_make.py", line 186, in wrap
cl = _add_init(cl)
File "/home/davidc/.envs/jaguar-dev/local/lib/python2.7/site-packages/attr/_make.py", line 340, in _add_init
bytecode = compile(script, unique_filename, "exec")
File "<attrs generated init 24e3b3b777d1b457ffb869223ee4a4b7665529ed>", line 1
SyntaxError: duplicate argument 'a' in function definition
So it looks like one level of inheritance works (B works as expected), but not multiple.
I noticed that creating instances of attr objects was expensive:
# python >= 3.3, rough timing
import time
from attr import attributes, attr
from attr.validators import instance_of

@attributes
class Arch(object):
    name = attr("name")

@attributes
class ValidatedArch(object):
    name = attr("name", validator=instance_of(str))

def build_platform():
    return Arch("x86")

def timeit(func):
    times = []
    for i in range(10000):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return "Min cost across 10000 iterations: {:.2f} us".format(min(times) * 1e6)

print(timeit(lambda: Arch("x86")))
print(timeit(lambda: ValidatedArch("x86")))
I see a factor of ~50x (0.8 us vs 34 us) on my machine. A quick profiling shows that almost all the difference comes from the deepcopy in the attr._make.fields function.
For non-trivial classes with a few attributes, the construction cost quickly reaches the hundreds of us.
Sometimes I want to write an attr.s class that has some behavior in its constructor. Particularly:
open something, for example) (sorry)
Arguably, many of these things could be expressed as a function, but some of them involve preserving valid structural relationships between the object and its dependencies, so I'd really prefer to be able to use __init__ in some cases.
I know, I can write my own __init__. I know! But if I do that, I give up not just on the convenience of attr.s writing the __init__ for me, but also the useful behavior of attr.s: i.e. validators, default factories.
All of my use-cases here would be easily accomplished with either a pre-__init__ or a post-__init__ hook. Most of the ones I'm actually interested in would all be achievable with a post-__init__ hook; I think pre-__init__ might be pointless given the availability of convert=.
This has been discussed already, but some of the other issues (#33 #24 #38 #38 #58) which have raised the possibility of a post-__init__ hook have been somewhat conflated with better support for inheritance. As we all know, inheritance is deprecated and will be removed in python 3.7 so we don't need to focus on that.
I don't know if this would even be technically possible, but it'd still be nice to have. I'm personally not that interested in the efficiency gains (although it'd be nice), but I'd like for exceptions to pop up when non-existent attributes are assigned to attrs instances (just got bitten by this making one of my tests fail).
Current behavior:
In [8]: @attr.s(repr=False)
...: class Foo(object):
...: x = attr.ib()
...: y = attr.ib(repr=True)
...:
In [9]: f = Foo(x="foo", y="bar")
In [10]: f
Out[10]: <__main__.Foo at 0x10a139c50>
Desired behavior:
In [8]: @attr.s(repr=False)
...: class Foo(object):
...: x = attr.ib()
...: y = attr.ib(repr=True)
...:
In [9]: f = Foo(x="foo", y="bar")
In [10]: f
Out[10]: Foo(y="bar")
Especially useful when you have many attr.ibs but only want one to actually be in the repr.
A reminiscence of #28:
# python >= 3.3, rough timing
import time
from attr import attributes, attr
from attr.validators import instance_of

def decode_if_needed(v):
    if isinstance(v, bytes):
        return v.decode()
    return v

@attributes
class Arch(object):
    name = attr("name")

@attributes
class ConvertedArch(object):
    name = attr("name", validator=instance_of(str), convert=decode_if_needed)

def timeit(func):
    times = []
    for i in range(10000):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return "Min cost across 10000 iterations: {:.2f} us".format(min(times) * 1e6)

print(timeit(lambda: Arch("x86")))
print(timeit(lambda: ConvertedArch(b"x86")))
Min cost across 10000 iterations: 1.40 us
Min cost across 10000 iterations: 43.72 us
Now, the fix may not be as trivial as #28, because I can see how it may be part of the API that the original attribute is not touched when using convert, and not using fields to iterate in https://github.com/hynek/attrs/blob/master/src/attr/_make.py#L447 would break that promise. As I have not seen any test related to this, maybe that promise could be relaxed?
A seemingly harmless example:
@attr.s
class Foo(object):
    a = attr.ib()

@attr.s
class Bar(Foo):
    pass

@attr.s
class Baz(Bar):
    pass
fails with
File "<attrs generated init 5250a51181cb781b2576810943a33169c4fea664>", line 1
SyntaxError: duplicate argument 'a' in function definition
Looks like attrs tries to include a twice in the parameter list of Baz#__init__?
(possibly related: #24)
Please consider avoiding double negatives in arguments... pretty please? no_xyz=False makes my head asplode. xyz=True is a lot more clear.
I'm packaging attrs for Fedora (building on F24), and hitting failures of TestAsDict::test_roundtrip and ::test_asdict_preserve_order, with both Python 2.7.12 and 3.5.1. It seems entirely likely that I'm doing something wrong, and I could use a bit of help diagnosing the cause.
The reported error for both seems to be a SyntaxError in generated code:
> bytecode = compile(script, unique_filename, "exec") E File "<attrs generated init 07eac0381b3f6f70c78894820f8cab045faef777>", line 1 E def __init__(self, aa=attr_dict['aa'].default, ac=attr_dict['ac'].default, ab=attr_dict['ab'].default, ae=attr_dict['ae'].default, ad=attr_dict['ad'].default, ag=attr_dict['ag'].default, af=attr_dict['af'].default, ai=attr_dict['ai'].default, ah=attr_dict['ah'].default, ak=attr_dict['ak'].default, aj=attr_dict['aj'].default, am=attr_dict['am'].default, al=attr_dict['al'].default, ao=attr_dict['ao'].default, an=attr_dict['an'].default, aq=attr_dict['aq'].default, ap=attr_dict['ap'].default, as=attr_dict['as'].default, ar=attr_dict['ar'].default, a=attr_dict['a'].default, c=attr_dict['c'].default, b=attr_dict['b'].default, e=attr_dict['e'].default, d=attr_dict['d'].default, g=attr_dict['g'].default, f=attr_dict['f'].default, i=attr_dict['i'].default, h=attr_dict['h'].default, k=attr_dict['k'].default, j=attr_dict['j'].default, m=attr_dict['m'].default, l=attr_dict['l'].default, o=attr_dict['o'].default, n=attr_dict['n'].default, q=attr_dict['q'].default, p=attr_dict['p'].default, s=attr_dict['s'].default, r=attr_dict['r'].default, u=attr_dict['u'].default, t=attr_dict['t'].default, w=attr_dict['w'].default, v=attr_dict['v'].default, y=attr_dict['y'].default, x=attr_dict['x'].default, z=attr_dict['z'].default): E ^ E SyntaxError: invalid syntax ../../BUILDROOT/python-attrs-16.0.0-5.fc24.noarch/usr/lib/python2.7/site-packages/attr/_make.py:381: SyntaxError
A complete log is attached:
f24_attrs_test.txt
Any suggestions will be most appreciated.
The docs mention "a bunch of validators", but they are not there yet, right? Am I correct that there is no way of supplying multiple validators? Of course one could write custom validators, but it seems more useful to be able to compose validators.
I'm not sure what's going on here, but the docs seem to suggest that it's a bug:
>>> import attr
>>> @attr.s
... class Foo(object):
... bar = attr.ib(init=False, default=42)
...
>>> Foo().bar # Should be 42
_CountingAttr(counter=6, default=42, repr=True, cmp=True, hash=True, init=False)
@hynek
This is not a bug report or an issue, but rather a question of whether support for more intelligent / composable validators would be accepted?
In the latest project where I have a few dozen different classes in the schema, I made a bunch of wrappers around validators (e.g. so you don't have to type attr.validators.instance_of every time, and a few other simplifications). I'll provide a few lines of sample code below:
fields_start = typed(non_negative(int), default=0)
checks = typed(list_of(Check), default=attr.Factory(list))
values = typed(non_empty(dict_of(non_empty(str), int)))
Here typed is a very lightweight wrapper around attr.ib. All of the validators here are fully composable, plus any type-like objects found where a validator is expected are automatically converted to instance_of (why else would you pass a type object to the validator argument?). Would you be interested in extending validators in this way? If so, I could try and pack some of this up into a PR 😸
There are some circumstances where I would like to add comparison, or hashability, without wanting to decorate my class with attrs.
_add_cmp and friends are very close to being generally useful. If they were made public, and the _attrs_to_tuple helper took an iterable of strings rather than Attribute objects, they'd be fully generic.
Would you consider a patch that allowed this?
attrs-decorated classes can produce unexpected behaviour when they exist in an inheritance tree with non-attrs-decorated classes that define their own __init__ methods, because those __init__ methods aren't called. This dumb test exercises this:
def test_init_with_multiple_inheritance():
    import attr

    @attr.s
    class A(object):
        a = attr.ib(default=42)

    class B(object):
        def __init__(self):
            super(B, self).__init__()
            self.b = 3

    class C(A, B):
        pass

    c = C()
    assert c.a == 42
    assert c.b == 3
The second assertion currently fails with an AttributeError.
In ordinary Python code this would be addressed with a super call in A.__init__, but I'm not sure if adding a super to the __init__ returned from _attrs_to_script would achieve the same result without breaking a bunch of other cases.
(I apologise for not digging further & submitting a patch; the code in _make.py is a little too terrifying to get stuck into on a Monday morning!)
Current behavior:
In [17]: @attr.s()
....: class Bar(object):
....: x = attr.ib()
....:
In [18]: b = Bar(x="foo")
In [19]: b.x
Out[19]: 'foo'
In [20]: help(b.x)
no Python documentation found for 'foo'
Desired behavior:
In [17]: @attr.s()
....: class Bar(object):
....: x = attr.ib(help="The x of bar") # or doc="The x of bar" or whatever
....:
In [18]: b = Bar(x="foo")
In [19]: b.x
Out[19]: 'foo'
In [20]: help(b.x)
Help on attribute:
The x of bar
(or something like that)
Similar to doing:
In [21]: class Foo(object):
   ....:     def __init__(self):
   ....:         self._x = None
   ....:     @property
   ....:     def x(self):
   ....:         """The x of Foo"""
   ....:         return self._x
   ....:
In [22]: f = Foo
In [23]: f.x
Out[23]: <property at 0x10a097940>
In [24]: help(f.x)
Help on property:
The x of Foo
A feature idea: attr.asdict currently turns classes into ordinary Python dicts. It'd be useful if a dict factory could be passed in (defaulting to just dict). The main use case would be to pass in OrderedDict.
I have a situation at work that this would be useful for, but we're approaching the problem from a different angle so we probably won't need it. Just thought it might be a useful feature and simple to implement. I could put together a pull request.
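The change amounts to threading a factory callable through the result construction. A minimal attrs-free sketch (the asdict signature with an explicit field-order list is illustrative, not attrs' API):

```python
# Sketch of the proposed dict_factory parameter: build the result via the
# supplied factory instead of a literal dict (hypothetical signature).
from collections import OrderedDict

def asdict(obj, field_order, dict_factory=dict):
    return dict_factory((name, getattr(obj, name)) for name in field_order)

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

d = asdict(Point(1, 2), ["x", "y"], dict_factory=OrderedDict)
assert isinstance(d, OrderedDict)
assert list(d.items()) == [("x", 1), ("y", 2)]
```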
Is there any specific reason that @attr.s doesn't add at least __getitem__ and __setitem__?
attr.asdict() is nice, but direct attribute access by key would be even more convenient.
For example, instead of:
backend = get_backend(config)
while not backend.check_config():
    click.echo('Invalid Configuration parameters!')
    for param_name, question in backend.config.check_config_requires:
        value = click.prompt(question, default=getattr(backend.config, param_name))
        setattr(backend.config, param_name, value)
would be much nicer to write:
backend = get_backend(config)
while not backend.check_config():
    click.echo('Invalid Configuration parameters!')
    for param_name, question in backend.config.check_config_requires:
        value = click.prompt(question, default=backend.config[param_name])
        backend.config[param_name] = value
I thought this syntax would be nice (same as attr.s(repr=True)):
@attr.s(dict=True)
class VaultConfig:
    name = 'Vault'
    url = attr.ib(default='http://localhost:8200')
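The requested item access is thin enough to sketch as a mixin that delegates to attribute access; the mixin name and Config class below are illustrative, and the dict=True flag remains the proposal's:

```python
# Sketch of dict-style access as a mixin: __getitem__/__setitem__
# delegate to getattr/setattr (hypothetical names).
class ItemAccessMixin:
    def __getitem__(self, key):
        return getattr(self, key)

    def __setitem__(self, key, value):
        setattr(self, key, value)

class Config(ItemAccessMixin):
    def __init__(self, url):
        self.url = url

c = Config("http://localhost:8200")
assert c["url"] == "http://localhost:8200"
c["url"] = "http://example.com"
assert c.url == "http://example.com"
```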
It might be desirable to allow user-supplied metadata dict as part of an Attribute for use later during introspection. Something like:
@attr.s
class A:
    boring = attr.ib()
    i_am_special = attr.ib(metadata={'special': True})

    def do_things(self):
        for a in attr.fields(type(self)):
            if a.metadata.get('special', False):
                self.do_something(a.name)
            else:
                self.do_something_else(a.name)
(above example is assuming attrs populates metadata with an empty dict if not provided.)
Perhaps this encourages more metaprogramming than is healthy, but I had a need for this in my first use of attrs; the alternative was keeping an explicit list of special attribute names, which is not very DRY. Just looking for design/desirability feedback before working on a patch.
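The core of the idea, stripped of attrs, is just dispatching on a per-field metadata dict. A tiny illustrative sketch (field names and the special key are made up for the example):

```python
# Plain-Python sketch of metadata-driven introspection: each field carries
# a metadata dict, and code dispatches on it (names are illustrative).
FIELDS = {
    "boring": {},
    "i_am_special": {"special": True},
}

def special_fields(fields):
    # Mirror a.metadata.get('special', False) from the example above.
    return [name for name, meta in fields.items() if meta.get("special", False)]

assert special_fields(FIELDS) == ["i_am_special"]
```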
On some browser-python runtimes, eval()
is either catastrophically slow (brython, skulpt) or completely unavailable (pyj(ama)s, batavia). Since attrs is The Library You Should Absolutely Use No Matter What™, it would be nice if it had an affordance to support those runtimes.