linkml / linkml-runtime

Runtime support for LinkML-generated models

Home Page: https://linkml.io/linkml/

License: Creative Commons Zero v1.0 Universal

Python 98.96% Shell 0.05% Jupyter Notebook 0.95% Makefile 0.04%
linkml-runtime linkml python object-model instances conversion hacktoberfest

linkml-runtime's Introduction

linkml-runtime


Runtime support for LinkML-generated data classes

About

This Python library provides runtime support for LinkML data models.

See the LinkML repo for the Python Dataclass Generator, which converts a schema into a Python object model. That model will depend on functionality in this library.

The library also provides

  • loaders: for loading LinkML instances from external formats such as JSON, YAML, RDF, and TSV
  • dumpers: the reverse operation, serializing LinkML instances back to those formats

See Working with Data in the documentation for more details.
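The loader/dumper pattern can be illustrated with a stdlib-only sketch, where a plain dataclass stands in for a LinkML-generated class (in linkml-runtime the real entry points live in linkml_runtime.loaders and linkml_runtime.dumpers; the helper names below are hypothetical):

```python
import json
from dataclasses import dataclass, asdict

# Stand-in for a class emitted by the LinkML Python generator.
@dataclass
class Person:
    id: str
    name: str

def load(source: str, target_class):
    """Loader sketch: parse an external format into an instance of target_class."""
    return target_class(**json.loads(source))

def dump(instance) -> str:
    """Dumper sketch: serialize an instance back to the external format."""
    return json.dumps(asdict(instance))

p = load('{"id": "P1", "name": "Ada"}', Person)
round_tripped = json.loads(dump(p))
```

The real loaders additionally normalize nested and inlined structures, but the load/dump symmetry is the same.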

This repository also contains the Python dataclass representation of the LinkML metamodel, and various utility functions that are useful for working with LinkML data and schemas.

It also includes the SchemaView class for working with LinkML schemas.

Notebooks

See the notebooks folder for examples.

linkml-runtime's People

Contributors

actions-user, amc-corey-cox, caufieldjh, cmungall, cthoyt, dalito, deepakunni3, gaurav, hrshdhgd, hsolbrig, joeflack4, joshmoore, kevinschaper, lkuchenb, nlharris, pkalita-lbl, richardbruskiewich, sierra-moxon, silvanoc, sneakers-the-rat, sujaypatil96, timothy-trinidad-ps, turbomam, victoriasoesanto, wdduncan, wolfgangfahl


linkml-runtime's Issues

compile_python uses the wrong root directory

compile_python.py has the following lines:

    if package_path:
        # We have to calculate the path to expected path relative to the current working directory
        path_from_tests_parent = os.path.relpath(package_path, os.path.join(os.getcwd(), '..'))
        module.__package__ = os.path.dirname(os.path.relpath(path_from_tests_parent, os.getcwd())).replace('/', '.')

These lines exist to establish a root path for imports, but os.getcwd() returns the directory from which the Python interpreter was invoked, which is not always the project base path. You can reproduce the issue by running tox from the base directory: module.__package__ becomes linkml-runtime.tests.test_issues.input instead of tests.test_issues.input.
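One possible direction for a fix, sketched with a hypothetical helper (not the actual patch): derive the dotted package name from the module's own location relative to an explicitly supplied root, rather than relying on os.getcwd():

```python
from pathlib import Path

def package_for(module_path: str, root: str) -> str:
    # Compute the dotted package name of module_path relative to an
    # explicit root directory, instead of whatever os.getcwd() happens
    # to be when the interpreter was started.
    rel = Path(module_path).resolve().relative_to(Path(root).resolve())
    return ".".join(rel.parent.parts)

pkg = package_for("/repo/tests/test_issues/input/mod.py", "/repo")
```

With an explicit root, the result no longer depends on where tox (or pytest) was launched from.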

Post poetry integration steps

Once PR #80 has been merged into the repo, do the following:

  • Remove pipenv-specific files: Pipfile, and also requirements.txt and requirements-dev.txt, since they were generated
  • Update the GitHub Actions workflow to use poetry-specific build steps rather than the current pipenv build steps
  • Update the Makefile to use poetry build steps

add a release instructions guide

A recipe for cutting a release would be awesome, e.g.:

  1. update pyproject.toml to bump the release version
  2. update CHANGELOG
  3. update Authors
  4. create a release from the GitHub interface.
  5. verify actions run...

etc...

TypedNode needs to handle plain strings if it gets called with them

The TypedNode.loc() function shouldn't assume that its argument is always a TypedNode instance.

The current code requires callers to check that a value is an instance of TypedNode before invoking loc(). This is brittle; TypedNode.loc() should gracefully handle plain strings (and other values) that aren't instances of itself.
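A defensive sketch of the idea (a minimal stand-in class, not the shipped implementation): return an empty location for any value that doesn't carry position metadata.

```python
class TypedNodeSketch:
    """Minimal stand-in for TypedNode carrying a source location."""

    def __init__(self, loc_str: str):
        self._loc = loc_str

    @classmethod
    def loc(cls, obj) -> str:
        # Tolerate plain strings (and any other value) instead of
        # requiring callers to isinstance-check first.
        return obj._loc if isinstance(obj, cls) else ""

node_loc = TypedNodeSketch.loc(TypedNodeSketch('line 3, col 8'))
plain_loc = TypedNodeSketch.loc("just a string")
```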

`SchemaView.materialize_patterns()` does not account for `slot_usage`

If I have a schema that contains the following:

# test.yaml
settings:
  generated_id: "[a-z0-9]{6,}"

classes:
  MyClass:
    slots:
      - id

slots:
  id:
    structured_pattern:
      syntax: "prefix:{generated_id}"

And I call the materialize_patterns method on a SchemaView instance for that schema, I see that the pattern meta-slot is populated as expected:

>>> from linkml_runtime import SchemaView
>>> view = SchemaView('test.yaml')
>>> view.schema.slots['id'].pattern is None
True
>>> view.materialize_patterns()
>>> view.schema.slots['id'].pattern
'prefix:[a-z0-9]{6,}'

However if the structured_pattern is instead set on the slot_usage of a class:

# test2.yaml
settings:
  generated_id: "[a-z0-9]{6,}"

classes:
  MyClass:
    slots:
      - id
  FancyIdClass:
    is_a: MyClass
    slot_usage:
      id:
        structured_pattern:
          syntax: "prefix:{generated_id}"

slots:
  id:

Calling materialize_patterns on the SchemaView instance doesn't seem to have any effect. I would expect the pattern to be materialized on the slot_usage definition in this case.

>>> view = SchemaView('test2.yaml')
>>> view.schema.classes['FancyIdClass'].slot_usage['id'].pattern is None
True
>>> view.materialize_patterns()
>>> # I think I would expect this to not be None at this point
>>> view.schema.classes['FancyIdClass'].slot_usage['id'].pattern is None
True

SchemaView should handle shared attributes with shared names appropriately

classes:
  C:
    attributes:
      a:
         ...
  D:
    attributes:
      a:
          ...

when using methods such as induced_slot that are contextualized by class, the behavior is unambiguous

however, when using the general get_slot method, the behavior can be ambiguous when fetching attribute a. SchemaView should return a slot when attributes=True is passed.

jsonasobj2.as_json_obj turns enums into empty dicts

book = Book(id='B1', genres=['fantasy'])
print(book.genres)
print(type(book.genres[0]))
print(as_json_obj(book.genres[0]))

out:

[(text='fantasy')]
<class 'tests.test_loaders_dumpers.models.books_normalized.GenreEnum'>
{}

This is a problem because remove_empty_items calls as_json_obj when converting lists.
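The failure mode can be reproduced with a stdlib sketch: a naive object-to-dict conversion finds nothing to emit for an enum member, while the fix is to special-case enums and emit their text/value (the converter name below is hypothetical; LinkML enums are not stdlib Enum, but the shape of the bug is the same):

```python
import json
from enum import Enum

class GenreEnum(Enum):
    fantasy = "fantasy"

def as_json_obj_sketch(value):
    # Hypothetical converter: emit the enum's value explicitly instead
    # of treating it like a plain object (whose attribute dict appears empty).
    if isinstance(value, Enum):
        return value.value
    if isinstance(value, list):
        return [as_json_obj_sketch(v) for v in value]
    return value

dumped = json.dumps(as_json_obj_sketch([GenreEnum.fantasy]))
```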

Bug/UX: `extended_str` incompatibility w/ Neo4j

Description

In the CCDH Terminology Service, we're trying to load a YAML model and import it into Neo4j. We never had this issue previously when importing the CCDH model into Neo4j; this is new.

Related issues

neo4j-contrib/interchange#4
#64
cancerDHC/ccdh-terminology-service#115

Error messages

Short err

TypeError: Values of type <class 'linkml_runtime.utils.yamlutils.extended_str'> are not supported

Long err

Traceback (most recent call last):
 File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
  return _run_code(code, main_globals, None,
 File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
  exec(code, run_globals)
 File "/app/ccdh/importers/importer.py", line 221, in <module>
  Importer.import_all()
 File "/app/ccdh/importers/importer.py", line 215, in import_all
  Importer(neo4j_graph()).import_harmonized_attributes(CrdcHImporter.read_harmonized_attributes())
 File "/app/ccdh/importers/importer.py", line 65, in import_harmonized_attributes
  self.import_harmonized_attribute(harmonized_attribute)
 File "/app/ccdh/importers/importer.py", line 105, in import_harmonized_attribute
  tx.create(subgraph)
 File "/usr/local/lib/python3.8/site-packages/py2neo/database.py", line 1063, in create
  create(self)
 File "/usr/local/lib/python3.8/site-packages/py2neo/data.py", line 200, in __db_create__
  records = tx.run(*pq)
 File "/usr/local/lib/python3.8/site-packages/py2neo/database.py", line 987, in run
  result = self._connector.run(self.ref, cypher, parameters)
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/__init__.py", line 1424, in run
  return cx.run(tx, cypher, parameters)
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/bolt.py", line 571, in run
  return self._run(tx.graph_name, cypher, parameters or {})
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/bolt.py", line 923, in _run
  response = self.append_message(0x10, cypher, parameters, extra or {})
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/bolt.py", line 726, in append_message
  self.write_message(tag, fields)
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/bolt.py", line 701, in write_message
  self._writer.write_message(tag, fields)
 File "/usr/local/lib/python3.8/site-packages/py2neo/client/bolt.py", line 240, in write_message
  packer.pack(field)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 151, in pack
  self._pack_dict(value)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 285, in _pack_dict
  self.pack(item)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 120, in pack
  self._pack_list(value)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 247, in _pack_list
  self.pack(item)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 151, in pack
  self._pack_dict(value)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 285, in _pack_dict
  self.pack(item)
 File "/usr/local/lib/python3.8/site-packages/interchange/packstream.py", line 200, in pack
  raise TypeError("Values of type %s are not supported" % type(value))
TypeError: Values of type <class 'linkml_runtime.utils.yamlutils.extended_str'> are not supported

Possible solutions

(Working through this in: cancerDHC/ccdh-terminology-service#115)

1. Have an alternative class to YAMLRoot?

This line is what creates the nested object containing extended_str:

model = yaml_loader.loads(yaml, target_class=YAMLRoot)

Maybe this is a failure of understanding on my part, but there does not appear to be much documentation (or code documentation) for YAMLLoader.

class YAMLLoader(Loader):

    def load(self, source: Union[str, dict, TextIO], target_class: Type[YAMLRoot], *, base_dir: Optional[str] = None,
             metadata: Optional[FileInfo] = None, **_) -> YAMLRoot:
        def loader(data: Union[str, dict], _: FileInfo) -> Optional[Dict]:
            return yaml.load(StringIO(data), DupCheckYamlLoader) if isinstance(data, str) else data

There isn't much info about target_class. But by the looks of it, we could perhaps have a subclass of YAMLRoot that uses plain str instead of extended_str. I'm not sure what else would have to change, though; I imagine extended_str is there for a reason, and simply converting every instance of it to str within the object would break something within the object/class.

Root loader doesn't handle lists

Line 60 in root loader reads:

return target_class(**data_as_dict) if data_as_dict is not None else None

This needs to be enhanced to handle other forms of data; at a bare minimum, it should also handle lists. A short-term fix might read:

    return target_class(data_as_dict) if isinstance(data_as_dict, list) \
        else target_class(**data_as_dict) if data_as_dict is not None else None

but we should consider calling the YAMLRoot _normalize_inlined_as_list or _normalize_inlined_as_dict element.

Note that this ties in with the issue about loading a schema as a dictionary as well as the one about self-identifying structures.
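A sketch of the proposed branch, with a plain dataclass standing in for the target class (the real fix may instead route through _normalize_inlined_as_list / _normalize_inlined_as_dict):

```python
from dataclasses import dataclass

@dataclass
class Container:
    # Hypothetical target class used only for illustration.
    items: list

def construct(target_class, data):
    # Handle a list payload as a positional argument, a dict payload
    # as keyword arguments, and None as None.
    if data is None:
        return None
    if isinstance(data, list):
        return target_class(data)
    return target_class(**data)

from_list = construct(Container, [1, 2, 3])
from_dict = construct(Container, {"items": [1, 2, 3]})
```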

Dataclasses shim fails in python 3.10

The signature of the dataclasses._init_fn has changed in version 3.10:

def _init_fn(fields, std_fields, kw_only_fields, frozen, has_post_init,
             self_name, globals):

Version <= 3.9 had:

def _init_fn(fields, frozen, has_post_init, self_name, globals):

The purpose of this shim was to allow unexpected keywords to be passed through to the __post_init__ function. We need to (a) see whether the shim is still needed in 3.10 and, if it is, (b) update the (somewhat misnamed) dataclass_extensions_376.py file.
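One way the shim could dispatch is by inspecting the private helper's parameters at runtime instead of hardcoding a version check. This is purely illustrative (dataclasses._init_fn is a private API and may change again):

```python
import inspect
import dataclasses

# Count the parameters the private helper takes in this interpreter:
# <= 3.9 had 5 (fields, frozen, has_post_init, self_name, globals);
# 3.10 grew to 7, adding std_fields and kw_only_fields.
params = list(inspect.signature(dataclasses._init_fn).parameters)
uses_kw_only_split = "kw_only_fields" in params
```

A shim branching on `uses_kw_only_split` would keep working even if a future minor release reorders unrelated parameters.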

An `if` statement missing

This concerns the code in distroutils.py, specifically the function get_packaged_file_as_str().

The code block:

    for path in get_default_paths(file_type) + rel_paths:
        try:
            full_path = path / f'{package_name}.{suffix}'
            data = pkgutil.get_data(package, str(full_path))
            break
        except FileNotFoundError:
            logging.debug(f'candidate {path} not found')
    if not data:
        raise FileNotFoundError(f'package: {package} file: {file_type}')
    return data.decode(encoding)

The break on line 61 should be executed only if data is actually returned (not None), so it should be:

    for path in get_default_paths(file_type) + rel_paths:
        try:
            full_path = path / f'{package_name}.{suffix}'
            data = pkgutil.get_data(package, str(full_path))
            if data:  # added this if statement
                break
        except FileNotFoundError:
            logging.debug(f'candidate {path} not found')
    if not data:
        raise FileNotFoundError(f'package: {package} file: {file_type}')
    return data.decode(encoding)

Reasoning:

The for loop just checks the first path returned by get_default_paths(file_type) and then breaks, regardless of whether data was returned. Hence, if the first path returned isn't the one for the schema file in the package, the code raises a FileNotFoundError.

The try-except statement looks like it is intended to take care of this, but it doesn't.
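Beyond adding the `if`, the search could use Python's for/else so the not-found branch runs only when no path yielded data. A stdlib sketch, with a stand-in for pkgutil.get_data:

```python
FILES = {"a/schema.yaml": b"content"}  # stand-in for packaged files

def get_data(path):
    # Stand-in for pkgutil.get_data: returns None when nothing is found.
    return FILES.get(path)

def find_first(paths, name):
    for path in paths:
        data = get_data(f"{path}/{name}")
        if data:
            break
    else:
        # The else clause runs only if the loop never hit `break`.
        raise FileNotFoundError(name)
    return data

found = find_first(["x", "a"], "schema.yaml")
```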

ValueError: Unrecognized prefix: <uri>

I'm encountering an error with linkml-runtime: ValueError: Unrecognized prefix: https://w3id.org/csolink/vocab/.

I'm not sure why a model with the CURIE csolink:xxxx would generate this error. Any ideas?

prefixes:
  ACMBOOKS: 'https://dl.acm.org/action/doSearch?SeriesKey=acmbooks&AllField='  # acm Digital library books
  ACMJOURNALS: 'https://dl.acm.org/action/doSearch?ConceptID=118230&AllField='  # acm Digital library journals
  AML: 'https://w3id.org/i40/aml#'    # Automation machine learning
  astrovocab: 'https://www.asc-csa.gc.ca/eng/resources/vocabulary/view.asp?id='  # astronautics vocab
  csolink: 'https://w3id.org/csolink/vocab/'
  linkml: 'https://w3id.org/linkml/'
Run source env/bin/activate
Traceback (most recent call last):
  File "script/jekyllmarkdowngen.py", line 583, in <module>
    JekyllMarkdownGenerator(yamlfile=args.yaml, schema=args.yaml).serialize(directory=args.dir)
  File "/home/runner/work/csolink-model/csolink-model/env/lib/python3.7/site-packages/linkml/utils/generator.py", line 119, in serialize
    self.visit_subset(ss)
  File "/home/runner/work/csolink-model/csolink-model/env/lib/python3.7/site-packages/linkml/generators/markdowngen.py", line 265, in visit_subset
    curie = self.namespaces.uri_or_curie_for(self.namespaces._base, underscore(subset.name))
  File "/home/runner/work/csolink-model/csolink-model/env/lib/python3.7/site-packages/linkml_runtime/utils/namespaces.py", line 195, in uri_or_curie_for
    raise ValueError(f"Unrecognized prefix: {prefix}")
ValueError: Unrecognized prefix: https://w3id.org/csolink/vocab/
Error: Process completed with exit code 1.

Pure number tags don't parse correctly

  sample_enum:
    description: Type of material sample
    permissible_values:
      0: Blood

This breaks the parser because YAML reads the unquoted key 0 as an integer, so we end up trying to construct a permissible value with (0="Blood").
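A hedged sketch of one mitigation: coerce permissible-value keys to strings before constructing the values (the dict below mimics what a YAML parser would deliver for the schema above):

```python
# Keys as a YAML parser might deliver them: the unquoted 0 arrives as int.
raw = {0: "Blood", "1": "Saliva"}

# Coerce every permissible-value key to str so downstream code that
# expects string names doesn't choke on integer keys.
permissible_values = {str(key): desc for key, desc in raw.items()}
```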

induced_slot returns incorrect range

In the NMDC schema, the slot_usage for has input defines the range as biosample. However, view.induced_slot('has input', 'biosample processing').range returns named thing.

yaml for biosample processing:

biosample processing:
    aliases:
      - material processing
    is_a: named thing
    description: >-
      A process that takes one or more biosamples as inputs and generates one or more biosamples as outputs.
      Examples of outputs include samples cultivated from another sample, or data objects created by instrument runs.
    slots:
      - has input
    slot_usage:
      has input:
        range: biosample

cc @cmungall

`AttributeError: type object 'ConstraintType' has no attribute 'UnmappedTypeConstraint'`

I am debugging a potentially faulty data model, and ran linkml-normalize on an example. This resulted in the exception

AttributeError: type object 'ConstraintType' has no attribute 'UnmappedTypeConstraint'

It is caused by the following line.

ConstraintType.UnmappedTypeConstraint, target.name, input_object

It was added in f597921

It appears that the corresponding permissible value has never been part of the ConstraintType enum.

https://github.com/linkml/linkml-runtime/blob/0c55320d243e57903d0f1aaf25aa1d9a786dd986/linkml_runtime/processing/validation_datamodel.yaml#L220C3-L220C17
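A defensive sketch of the call site: look up the permissible value with a fallback instead of letting a bare attribute access raise AttributeError (names below are hypothetical; the deliberately missing member mirrors the reported bug):

```python
from enum import Enum

class ConstraintTypeSketch(Enum):
    # Illustrative subset; UnmappedTypeConstraint is deliberately absent,
    # mirroring the reported bug.
    TypeConstraint = "TypeConstraint"

def constraint_or_none(name: str):
    # getattr with a default avoids AttributeError for missing members,
    # letting the caller decide how to report the gap.
    return getattr(ConstraintTypeSketch, name, None)

missing = constraint_or_none("UnmappedTypeConstraint")
present = constraint_or_none("TypeConstraint")
```

The proper fix is presumably to add the missing permissible value to the enum, but a guarded lookup would turn the crash into a diagnosable condition.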

Error: unknown url type: 'external_identifiers.yaml' when loading nmdc.yaml from pypi

@cmungall
I need to load a SchemaView within nmdc_runtime. The nmdc-schema PyPI module includes the YAML files, so I can load a SchemaView like this:

nmdc_yaml = io.BytesIO(pkgutil.get_data("nmdc_schema", "nmdc.yaml"))
view = SchemaView(nmdc_yaml.getvalue().decode('utf-8'))

But when I run an operation like view.all_class(), it fails with the error:

ValueError: unknown url type: 'external_identifiers.yaml'

Unit tests randomly fail because of non-pyldmod dependency

The vanilla pyld package does not allow "file:///" files; the pyldmod package does. For some reason pyld randomly creeps into the unit tests, which results in a failure.

The correct way to fix this is to include an enhanced loader as part of this package and to supply it in the pyld options call. Unfortunately, custom loaders don't propagate all the way through the pyld call stack, so more work is needed.

Modify `YAMLLoader.load()` signature to allow `target_class = None`?

Similar to the mechanism that infers target_class via infer_root_class() in linkml.datautils, do we also want to add that utility to linkml-runtime, so that the YAMLLoader.load() method allows us to specify target_class = None when we call it?

I raised this issue because I was trying to write a unit test for issue #494 and couldn't simulate the conditions required for that test. When we create the object we want to validate against our schema using YAMLLoader.load(), we run into TypeError: 'NoneType' object is not callable, since the target_class being passed is None.

See PR #550.

CC: @cmungall

Test suite errors and warnings (windows)

Env: Win10 Pro (German)
linkml-runtime: main-branch a8b4dd1

(.venv) λ pytest
====================================== test session starts =======================================
platform win32 -- Python 3.10.2, pytest-7.0.1, pluggy-1.0.0
rootdir: C:\Users\david\MyProg\linkml-runtime
collected 83 items
...
============== 4 failed, 74 passed, 4 skipped, 1 xfailed, 7 warnings in 28.60s ===============

Errors

============================================ FAILURES ============================================ 
_____________________________ Issue368TestCase.test_issue_368_enums ______________________________

self = <tests.test_issues.test_issue_368_enums.Issue368TestCase testMethod=test_issue_368_enums>

    def test_issue_368_enums(self):
        """ Test Enum generation """

>       module = compile_python(env.input_path('issue_368.py'))

tests\test_issues\test_issue_368_enums.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
linkml_runtime\utils\compile_python.py:47: in compile_python
    exec(spec, module.__dict__)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   ???
E   ModuleNotFoundError: No module named 'tests\\test_issues\\input'

test:23: ModuleNotFoundError
________________________________ Issue6TestCase.test_loc_function ________________________________

self = <tests.test_issues.test_issue_6.Issue6TestCase testMethod=test_loc_function>

    def test_loc_function(self):
        inp = yaml.load(hbreader.hbread(inp_yaml), DupCheckYamlLoader)
        self.assertEqual('File "<unicode string>", line 3, col 8: ', TypedNode.yaml_loc(inp['foo']['x']))
        self.assertEqual('File "<unicode string>", line 3, col 8', TypedNode.yaml_loc(inp['foo']['x'], suffix=''))
        self.assertEqual('File "<unicode string>", line 4, col 8: ', TypedNode.yaml_loc(inp['foo']['y']))
        self.assertEqual('File "<unicode string>", line 4, col 8I yam that I yam',
                         TypedNode.yaml_loc(inp['foo']['y'], suffix=inp['foo']['y']))
        self.assertEqual('File "<unicode string>", line 5, col 8: ', TypedNode.yaml_loc(inp['foo']['z']))

        outs = StringIO()
        with redirect_stderr(outs):
            self.assertEqual('File "<unicode string>", line 3, col 8', TypedNode.loc(inp['foo']['x']))
>       self.assertIn('Call to deprecated method loc. (Use yaml_loc instead)', outs.getvalue())
E       AssertionError: 'Call to deprecated method loc. (Use yaml_loc instead)' not found in ''

tests\test_issues\test_issue_6.py:31: AssertionError
________________________________ LoadersUnitTest.test_json_loader ________________________________

self = <tests.test_loaders_dumpers.test_loaders.LoadersUnitTest testMethod=test_json_loader>

    def test_json_loader(self):
        """ Load obo_sample.json, emit obo_sample_json.yaml and check the results """
>       self.loader_test('obo_sample.json', Package, json_loader)

tests\test_loaders_dumpers\test_loaders.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\test_loaders_dumpers\loaderdumpertestcase.py:63: in loader_test
    self.assertEqual('tests/test_loaders_dumpers/input', os.path.relpath(metadata.base_path, rel_path))
E   AssertionError: 'tests/test_loaders_dumpers/input' != 'tests\\test_loaders_dumpers\\input'
E   - tests/test_loaders_dumpers/input
E   ?      ^                    ^
E   + tests\test_loaders_dumpers\input
E   ?      ^                    ^
________________________________ LoadersUnitTest.test_yaml_loader ________________________________

self = <tests.test_loaders_dumpers.test_loaders.LoadersUnitTest testMethod=test_yaml_loader>

    def test_yaml_loader(self):
        """ Load obo_sample.yaml, emit obo_sample_yaml.yaml and compare to obo_sample_output.yaml """
>       self.loader_test('obo_sample.yaml', Package, yaml_loader)

tests\test_loaders_dumpers\test_loaders.py:26:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\test_loaders_dumpers\loaderdumpertestcase.py:63: in loader_test
    self.assertEqual('tests/test_loaders_dumpers/input', os.path.relpath(metadata.base_path, rel_path))
E   AssertionError: 'tests/test_loaders_dumpers/input' != 'tests\\test_loaders_dumpers\\input'
E   - tests/test_loaders_dumpers/input
E   ?      ^                    ^
E   + tests\test_loaders_dumpers\input
E   ?      ^                    ^
------------------------------------ Captured stderr teardown ------------------------------------
Validation error: File "tests\test_loaders_dumpers\output\dump\obo_sample_context.json", line 1
Stack:  File "C:\Users\david\MyProg\linkml-runtime\tests\support\test_environment.py", line 103 in log
                File "C:\Users\david\MyProg\linkml-runtime\tests\support\test_environment.py", line 241 in eval_single_file
                File "C:\Users\david\MyProg\linkml-runtime\tests\test_loaders_dumpers\loaderdumpertestcase.py", line 33 in dump_test
                File "C:\Users\david\MyProg\linkml-runtime\tests\test_loaders_dumpers\test_dumpers.py", line 55 in test_json_dumper
Output was changed.


Validation error: File "tests\test_loaders_dumpers\output\dumps\obo_sample_context.json", line 1
Stack:  File "C:\Users\david\MyProg\linkml-runtime\tests\support\test_environment.py", line 103 in log
                File "C:\Users\david\MyProg\linkml-runtime\tests\support\test_environment.py", line 241 in eval_single_file
                File "C:\Users\david\MyProg\linkml-runtime\tests\test_loaders_dumpers\loaderdumpertestcase.py", line 45 in dumps_test
                File "C:\Users\david\MyProg\linkml-runtime\tests\test_loaders_dumpers\test_dumpers.py", line 58 in test_json_dumper
Output was changed.

Warnings

======================================== warnings summary ========================================
tests\support\test_environment.py:35
  C:\Users\david\MyProg\linkml-runtime\tests\support\test_environment.py:35: PytestCollectionWarning: cannot collect test class 'TestEnvironment' because it has a __init__ constructor (from: tests/support/test_environment.py)
    class TestEnvironment:

.venv\lib\site-packages\_pytest\fixtures.py:229
  C:\Users\david\MyProg\linkml-runtime\.venv\lib\site-packages\_pytest\fixtures.py:229: UserWarning: Code: _pytestfixturefunction is not defined in namespace XSD
    fixturemarker: Optional[FixtureFunctionMarker] = getattr(

tests/test_issues/test_issue_6.py::Issue6TestCase::test_loc_function
  C:\Users\david\MyProg\linkml-runtime\tests\test_issues\test_issue_6.py:30: DeprecationWarning: Call to deprecated method loc. (Use yaml_loc instead)
    self.assertEqual('File "<unicode string>", line 3, col 8', TypedNode.loc(inp['foo']['x']))

tests/test_utils/test_metamodelcore.py::MetamodelCoreTest::test_datetime
  C:\Users\david\MyProg\linkml-runtime\tests\test_utils\test_metamodelcore.py:167: UserWarning: Code: datetime is not defined in namespace XSD
    self.assertEqual('2019-07-06T17:22:39.007300', XSDDateTime(Literal(v, datatype=XSD.datetime)))

tests/test_utils/test_schemaview.py::SchemaViewTestCase::test_schemaview
tests/test_utils/test_schemaview.py::SchemaViewTestCase::test_schemaview
tests/test_utils/test_schemaview.py::SchemaViewTestCase::test_schemaview
  C:\Users\david\MyProg\linkml-runtime\linkml_runtime\utils\schemaview.py:710: DeprecationWarning: Call to deprecated method all_element. (Use 'all_elements' instead)
    elements = self.all_element()

uritypes_from_tccm need to be imported

linkml_utils/utils/uritypes_from_tccm.py is generated from the tccm model. We need to create an importable version of that model, as this file won't age well.

Add poetry specific contribution guidelines to CONTRIBUTING.md

Break down the CONTRIBUTING.md document into the following sections and add content:

How to contribute

  • Reporting bugs or feature requests
  • General contribution instructions
  • The development cycle
  • Development environment setup
  • Release process
  • Testing
  • Code style convention

windows tests are failing

E.g.
#211

 poetry install --no-interaction --no-root
     |  ~~~~~~
     | The term 'poetry' is not recognized as a name of a cmdlet, function, script file, or executable
     | program. Check the spelling of the name, or if a path was included, verify that the path is correct
     | and try again.

I see

Setting Poetry installation path as /c/Users/runneradmin/.local

Installing Poetry 👷

Retrieving Poetry metadata

# Welcome to Poetry!

This will download and install the latest version of Poetry,
a dependency and package manager for Python.

It will add the `poetry` command to Poetry's bin directory, located at:

C:\Users\runneradmin\.local\bin

You can uninstall at any time by executing this script with the --uninstall option,
and these changes will be reverted.

Installing Poetry (1.2.2)
Installing Poetry (1.2.2): Creating environment
Installing Poetry (1.2.2): Installing Poetry
Installing Poetry (1.2.2): Creating script
Installing Poetry (1.2.2): Done

Poetry (1.2.2) is installed now. Great!

To get started you need Poetry's bin directory (C:\Users\runneradmin\.local\bin) in your `PATH`
environment variable.

Alternatively, you can call Poetry explicitly with `C:\Users\runneradmin\.local\bin\poetry`.

You can test that everything is set up by executing:

`poetry --version`

Is this new? Any ideas on the best way to fix this, @dalito?
