awslabs / fhir-works-on-aws-persistence-ddb

A DynamoDB implementation of the FHIR Works on AWS framework, enabling users to complete CRUD operations on FHIR resources

License: Apache License 2.0

Languages: JavaScript 0.68%, TypeScript 99.32%
Topics: hl7, fhir, healthcare, nodejs, aws, typescript, fhir-works

fhir-works-on-aws-persistence-ddb's Introduction

fhir-works-on-aws-persistence-ddb

This GitHub repository has been migrated. You can now find FHIR Works on AWS at https://github.com/aws-solutions/fhir-works-on-aws.

Purpose

Please visit fhir-works-on-aws-deployment for the overall vision of the project and for more context.

This package is an implementation of the persistence & bundle components of the FHIR Works interface. It is responsible for executing CRUD-based requests from the router. To use and deploy this component (with the other 'out of the box' components), please follow the overall README.

Infrastructure

This package assumes certain infrastructure:

  • DynamoDB - The table name defined by the environment variable RESOURCE_TABLE
    • Partition key is 'id' and sort key is 'vid'
  • Elasticsearch - The Elasticsearch domain is defined by the environment variable ELASTICSEARCH_DOMAIN_ENDPOINT
    • Indexes are defined by the resource type
  • S3 Bucket - The bucket name is defined by the environment variable FHIR_BINARY_BUCKET
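As a minimal sketch of wiring this infrastructure up, the expected environment variables could be read and validated at startup. The variable names come from the list above; `requireEnv` and `loadInfrastructureConfig` are hypothetical helpers, not part of this package:

```typescript
type Env = Record<string, string | undefined>;

// Illustrative helper: fail fast when a required setting is missing.
function requireEnv(name: string, env: Env): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Gather the three settings this package assumes, per the list above.
function loadInfrastructureConfig(env: Env): {
  resourceTable: string;
  elasticsearchDomainEndpoint: string;
  fhirBinaryBucket: string;
} {
  return {
    resourceTable: requireEnv('RESOURCE_TABLE', env),
    elasticsearchDomainEndpoint: requireEnv('ELASTICSEARCH_DOMAIN_ENDPOINT', env),
    fhirBinaryBucket: requireEnv('FHIR_BINARY_BUCKET', env),
  };
}
```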

Usage

To use this package, add it to your package.json file and install it as a dependency. For usage examples, please see the deployment component's package.json.
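A minimal package.json entry might look like the following (the version shown is illustrative, not a pinned recommendation):

```json
{
  "dependencies": {
    "fhir-works-on-aws-persistence-ddb": "^3.0.0"
  }
}
```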

Dependency tree

This package is dependent on:

Package that depends on fhir-works-on-aws-persistence-ddb package:

  • deployment component
    • This package deploys fhir-works-on-aws-persistence-ddb and all the other default components

Known issues

For known issues, please track the issues on the GitHub repository.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

fhir-works-on-aws-persistence-ddb's People

Contributors

allanhodkinson, amazon-auto, awsbakha, bingjiling, carvantes, dependabot[bot], emilhdiaz, justinusmenzel, kcadette, nguyen102, nisankep, rsmayda, sanketd92, ssvegaraju, zambonilli


fhir-works-on-aws-persistence-ddb's Issues

Detect all invalid resources instead of failing in Elasticsearch

Invalid resources are being created in DynamoDB and then failing in Elasticsearch:

For example, money values like "net": 12345 are accepted where "net": { "value": 12345, "currency": "USD" } is expected. The object is stored in DynamoDB but fails during Elasticsearch indexing.

This occurs despite the schema validator specifying that Money is a dictionary.

AC:

Entering an invalid object should fail.
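A minimal sketch of the kind of shape check this AC implies: reject a bare number where a FHIR Money object is expected. `isMoney` is a hypothetical helper, not the project's actual validator:

```typescript
// FHIR Money, reduced to the two fields relevant to the example above.
interface Money {
  value: number;
  currency: string;
}

// Type guard: a bare number like 12345 must be rejected where Money is expected.
function isMoney(input: unknown): input is Money {
  if (typeof input !== 'object' || input === null) {
    return false;
  }
  const candidate = input as { value?: unknown; currency?: unknown };
  return typeof candidate.value === 'number' && typeof candidate.currency === 'string';
}
```

Running such a check before the DynamoDB write would make the invalid object fail at the API boundary instead of during Elasticsearch indexing.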

Sync up between DynamoDB and ES

We have had a few ES cluster corruptions, and the data is now out of sync between the DynamoDB and ES data stores. Are there any enhancements planned for syncing data between these data stores?

ES is always prone to data unreliability, and having a sync function within persistence-ddb would help in these types of situations.

Loss of resource metadata in v3.3.1

Updating to the latest version of the package (3.3.0 -> 3.3.1) results in the loss of content in the resource's meta attribute.

In v3.3.0 when updateResource() is called the returned resource (the result of the update operation) contains (for example):

[MedicationRequest]

{ versionId: "2", lastUpdated: "2021-04-13T23:10:07.899Z", }

In v3.3.1 when updateResource() is called the returned resource meta attribute contains:

null

Back the DynamoDB-to-Elasticsearch data synchronization with a message queue

Problem statement

During large batch updates, Elasticsearch sometimes can't keep up when many updates are done in parallel through the existing mechanism, in which DynamoDB write events are received by a lambda that writes to Elasticsearch in parallel.

Proposed change

The lambda triggered by DynamoDB write events should just write each event into an SQS message queue instead of trying to sync the data directly to Elasticsearch. (Maybe DynamoDB can be configured so the write events are fed into SQS without the need for a lambda.)
Another lambda picks the messages up from SQS and sends the updates to Elasticsearch. The main point of using a message queue is decoupling the DynamoDB write events from the writes to Elasticsearch, which tend to be slower.
To accommodate the slower processing speed of Elasticsearch, the lambda needs to throttle the message consumption rate, and the message queue needs to be able to retain all unprocessed messages. That number could grow quite large. Eventually, though, all messages should be processed and every write event sent to Elasticsearch.
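The proposed decoupling can be sketched with an in-memory queue standing in for SQS: the stream handler only enqueues, and a separate consumer drains a bounded batch at its own pace. All names here are illustrative, not this solution's actual implementation:

```typescript
// In-memory stand-in for SQS: retains all unprocessed items.
class EventQueue<T> {
  private items: T[] = [];
  enqueue(item: T): void {
    this.items.push(item);
  }
  dequeueBatch(max: number): T[] {
    return this.items.splice(0, max);
  }
  get size(): number {
    return this.items.length;
  }
}

// Producer: the DynamoDB-stream lambda would only enqueue each write event.
function onDynamoDbWriteEvent<T>(queue: EventQueue<T>, event: T): void {
  queue.enqueue(event);
}

// Consumer: a second lambda drains a bounded batch per invocation, so the
// Elasticsearch writer sees a throttled stream instead of a parallel burst.
function drainToElasticsearch<T>(
  queue: EventQueue<T>,
  write: (batch: T[]) => void,
  maxBatch = 10,
): number {
  const batch = queue.dequeueBatch(maxBatch);
  if (batch.length > 0) {
    write(batch);
  }
  return batch.length;
}
```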

[Feature Request] Need ability to specify index/alias name in handleDdbToEsEvent

Is your feature request related to a problem? Please describe.
Need a way to populate the Elasticsearch index without using the standard alias used in the FHIR Works DDB-to-ES lambda. Since the lambda may fail, we need ways to populate the index with data, or even to rebuild specific indices if needed. Currently the code that does this, handleDdbToEsEvent, makes an assumption about the alias name to use. This alias is backed by an index. We want to be able to populate that index independently of the current index that backs the alias; that way, we can reindex and then switch the alias without downtime. We plan to use the above method, but the assumption made in the code always forces the data to go through a fixed alias name.

Describe the solution you'd like
There should be a way to specify the target index in the handleDdbToEsEvent method. This could either be part of the entries in the event object that is a method parameter, or a callback function that, given a resourceType, returns the index to use.
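The callback option could be sketched as follows. The lowercase-alias default here is an assumption for illustration, not necessarily the package's actual naming convention:

```typescript
// Hypothetical callback shape: given a resourceType, return the index
// (or alias) name to write to.
type IndexNameResolver = (resourceType: string) => string;

// Illustrative default mirroring a simple lowercase-alias convention.
const defaultResolver: IndexNameResolver = (resourceType) => resourceType.toLowerCase();

// The sync handler would consult the resolver instead of a fixed alias.
function resolveIndex(
  resourceType: string,
  resolver: IndexNameResolver = defaultResolver,
): string {
  return resolver(resourceType);
}
```

A caller rebuilding an index could then pass something like `(rt) => rt.toLowerCase() + '-reindex'` to write to a fresh index and switch the alias afterwards, without downtime.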

Describe alternatives you've considered
We currently use our code to batch the updates. We do want to use the AWS method once it supports batch.


[Bug] Back-to-back batch updates of the same record creates duplicate records

Describe the bug
Sending a batch update with a PUT to a specific practitioner, and then, after the first batch resolves and returns a response, immediately sending another batch update containing a PUT to that same practitioner, creates a duplicate record rather than updating the existing one.

  • Both records use the same ID, and only the latest version is returned when calling GET against that practitioner.
  • When running a search based on e.g. name or NPI, both versions of the record are returned.
  • Calling DELETE against the record will only delete the latest version, but the previous version is left ACTIVE in DynamoDB, and therefore still returns in the search results.
  • Calling DELETE again to try and delete the still-active record returns 404 Not Found.

To Reproduce
Steps to reproduce the behavior:

  1. Create a new practitioner record with a unique name.
  2. Send a batch request to fhirworks containing a PUT request to /Practitioner/${id of record you just created}
  3. As soon as that request resolves, send it again. You have to send it after the previous request resolves, but before DynamoDB's commit window so the changes aren't quite persisted to the DB or the search service yet (admittedly I don't know much about how it works under the hood; this is just my understanding based on preliminary poking around).
  4. Perform a search against the practitioner endpoint based around the practitioner's name.
  5. You will see two records in the search results with the same ID.
  6. Send a DELETE request against the practitioner and re-run the search.
  7. You will now see one record in the search results.
  8. Send a GET request to the practitioner endpoint, using the ID of the practitioner.
  9. You will get 404 Not Found, despite that practitioner appearing in the search results.

Expected behavior
Successive PUT requests to the same endpoint (batched or not) should be queued to run sequentially, and await the DynamoDB and/or Elasticsearch commit time to ensure duplicate records are not created.

Versions (please complete the following information):

  • Release Version installed: 4.3.1

Additional context
We are working on a bulk import process, and our sample file contained the same ~50 records repeated over and over again just to quickly fill out test data. The records were grouped up into chunks of 10-20 and then imported, making sure that each individual request contains no duplicates, and that each request is awaited and fully resolved before doing the next one. We now have 18000+ duplicate records in our test system that we cannot delete by any means.

For the time being, as a workaround, we've had to simply state that all import files must be manually checked for duplicate identifiers before being imported, to make 100% sure that the same record isn't updated twice in too short of a timespan. This obviously isn't ideal as some of the files contain millions of records.

Posting Resources with client specific ids

We are working on FHIR(Fast Healthcare Interoperability Resources).
We have followed “FHIR works on AWS” and deployed the Cloud Formation template given by AWS in our AWS environment. Following is the template that we have deployed.

https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html

Requirement: we want to maintain client-specific/customized ids as the primary key in the server.
Problem: the server does not allow us to override or maintain client-specific (customized) ids as the primary key. In fact, at runtime it generates its own ids and ignores the ids we provide.
Could you please let us know if there is any way to post FHIR resources with client-specific ids into the FHIR server (DynamoDB)?

[Feature Request] Keep Track of Resource Count in DDB by Resource Type

Is your feature request related to a problem? Please describe.
We sync resources from dynamodb to elasticsearch, but errors can occur in the sync process. We want to create a report about the resource count in DDB and ES to check whether resource counts in ES match those in DDB.
Getting the counts in DDB by scanning the table is not feasible for production, and if we use a DynamoDB stream to call a Lambda function to update the counts, it is hard to keep the results accurate, since errors can happen in the step that updates the counts.

Describe the solution you'd like
We would like to add some logic to update the resource counts and store them somewhere (e.g. a new DDB table) when a resource is created or deleted; updating the counts must be in the same transaction as creating/deleting the resources.
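Keeping the count in the same transaction can be sketched with a single TransactWriteItems request that pairs the resource write with a counter update, so the two can never drift apart. Table names, the counter attribute, and the DocumentClient-style parameter shapes below are illustrative assumptions:

```typescript
interface TransactWriteItem {
  Put?: { TableName: string; Item: Record<string, unknown> };
  Update?: {
    TableName: string;
    Key: Record<string, unknown>;
    UpdateExpression: string;
    ExpressionAttributeValues: Record<string, unknown>;
  };
}

// Build one atomic transaction: write the resource and bump the per-type
// count together, so the counter stays accurate even if later steps fail.
function buildCreateWithCountTransaction(
  resourceTable: string,
  countTable: string,
  resource: { id: string; vid: number; resourceType: string },
): { TransactItems: TransactWriteItem[] } {
  return {
    TransactItems: [
      { Put: { TableName: resourceTable, Item: resource } },
      {
        Update: {
          TableName: countTable,
          Key: { resourceType: resource.resourceType },
          UpdateExpression: 'ADD resourceCount :one',
          ExpressionAttributeValues: { ':one': 1 },
        },
      },
    ],
  };
}
```

A delete would be the mirror image, with `:one` set to -1.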

Describe alternatives you've considered
switch to other databases that support queries with aggregation


[Bug] Duplicate records for some resource identifiers in AVAILABLE status

Describe the bug
We run ETL jobs to load data into DynamoDB using the DynamoDbBundleService's transaction method. Our approach involves creating FHIR resources as a bundle, parsing them using the BundleParser, and then calling the above method to persist the contents. One other aspect of our transformation that may be causing this issue is that we load historical data in large volumes and use an algorithm to predictably generate the identifiers of the target FHIR resources. That can potentially cause race conditions when the same resource ID of a resource type is being updated from more than one worker (multiple processes implement the ETL).

The present approach to locking seems to be failing here, causing multiple versions with the same resource identifier to have the AVAILABLE status. A side effect of this is the presence of duplicate records in Elasticsearch as well.

To Reproduce
Steps to reproduce the behavior:

  1. Take multiple records with the same identifier and simultaneously try to update them, preferably from multiple processes.

Expected behavior
A locking error should be predictably thrown so the calling code can catch, analyze and retry as needed.


Versions (please complete the following information):

  • Release Version installed [e.g. v1.0.3]


Unable to create Patient resource in transaction Bundle

When attempting to create a Patient resource within a transaction Bundle, the request fails returning a HTTP/400 response with the following error:

{
    "resourceType": "OperationOutcome",
    "issue": [
        {
            "severity": "error",
            "code": "invalid",
            "diagnostics": "Cannot read property 'length' of undefined"
        }
    ]
}

The root cause of the error is the relative Organization reference provided in the managingOrganization field. The server is failing to parse the reference value as it is expecting an absolute URL instead.

If we change the request to include an absolute URL with the base URL for the server, the request completes successfully, creating the Patient resource. However, retrieving the Patient resource created returns an incorrect value in the managingOrganization field. Instead of the expected absolute URL, a relative value is returned with the wrong resource type, "Patient" instead of "Organization". The logical ID is correct though.

Here is the Bundle request we tested with:

{
    "resourceType": "Bundle",
    "type": "transaction",
    "entry": [
        {
            "resource": {
                "resourceType": "Patient",
                "active": true,
                "name": [
                    {
                        "family": "Smith",
                        "given": [
                            "John"
                        ]
                    }
                ],
                "gender": "male",
                "birthDate": "1996-09-24",
                "managingOrganization": {
                    "reference": "{fhirBaseUrl}/Organization/2f3df8d8-863e-47fb-a9b9-c634c8794fb9"
                },
                "address": [
                    {
                        "city": "Halifax",
                        "state": "NS",
                        "postalCode": "B4B2C3",
                        "country": "Canada"
                    }
                ]
            },
            "request": {
                "method": "POST",
                "url": "Patient"
            }
        }
    ]
}

And here is the Patient resource created:

{
    "birthDate": "1996-09-24",
    "meta": {
        "lastUpdated": "2021-05-01T12:25:02.256Z",
        "versionId": "1"
    },
    "managingOrganization": {
        "reference": "Patient/2f3df8d8-863e-47fb-a9b9-c634c8794fb9"
    },
    "address": [
        {
            "country": "Canada",
            "state": "NS",
            "city": "Halifax",
            "postalCode": "B4B2C3"
        }
    ],
    "name": [
        {
            "family": "Smith",
            "given": [
                "John"
            ]
        }
    ],
    "gender": "male",
    "active": true,
    "resourceType": "Patient",
    "id": "833164a5-7981-4e0e-a698-c6fb29340de5"
}

This issue was discovered while testing with v2.6.0.

[Misc] elasticsearch-js compatibility with OpenSearch

What's on your mind?
Sorry, this might be a dumb question and it isn't urgent. How is the current ddbToEsSync logic connecting to OpenSearch clusters without error when using elasticsearch-js 7.4? Elastic added a check to the connection logic in elasticsearch-js that fails the connection if the cluster responds saying it is running a version licensed under OSS. When using the same exact versions of @elastic/elasticsearch (^7.4), aws-elasticsearch-connector (^8.2.0), and aws-sdk (^2.610.0) in a fresh node app, I receive the OSS error trying to connect to our fwoa Elasticsearch cluster. The recommended fix is to pin @elastic/elasticsearch to version 7.13, and it looks like the OpenSearch team is working on a fork that works with OSS. With my new node app, rolling back to 7.13 fixed the connection failure, but I'm wondering how we're not seeing the same error in fhir-works-on-aws-persistence-ddb when it uses @elastic/elasticsearch. I didn't see any pushes mentioning the failure, and the connect code doesn't appear to have any special logic.

elastic/elasticsearch-js#1519
https://aws.amazon.com/blogs/opensource/keeping-clients-of-opensearch-and-elasticsearch-compatible-with-open-source/

Versions (please complete the following information):

  • v3.8.1

It is not possible to create resources with reference when using bundle

Hi AWS Team,
Thanks for implementing FHIR Works.

For example,
when I register an Observation resource using {{API_URL}}/Observation, with:

    "subject": {
        "reference": "Patient/test-patient-01"
    }, 

and

    "device": {
        "reference": "Device/test-device-01"
    },

it worked fine for me.
But when I used the same Observation resource to register using bundle {{API_URL}},
An error has occurred:

    "issue": [
        {
            "severity": "error",
            "code": "invalid",
            "diagnostics": "Cannot read property 'length' of undefined"
        }
    ]

is this a bug?

Thanks in advance for any help you are able to provide.

Support Conditional CRUD operations

FHIR does have support for conditional operations, but it is currently in trial use only. This issue will serve as a tracker of our progress toward supporting this new functionality. Also, if this is a feature you'd like, please let us know.

FHIR spec for context: https://www.hl7.org/fhir/http.html#cond-update

A/C:

  • Add functionality
  • Update integration test
  • Update deploy package to support it

Use Profile-specific indexing instead of Dynamic Indexing

Use the FHIR Resources/Profiles, which specify the required search parameters, to generate and update the indices:

Dynamic indexing of FHIR documents consumes a lot of compute resources and time and causes scalability issues, especially during bulk loads.

[Bug] Synchronization of DDB data with ES using the DDB-to-ES lambda is slow

Describe the bug
The synchronization of DynamoDB data to Elasticsearch using the DynamoDB stream is slow and results in a large number of errors when the data volume is high.

To Reproduce
Steps to reproduce the behavior:

  1. Load a large amount of data in DynamoDB
  2. Check the aws cloudwatch console to monitor ElasticSearch
  3. The synchronization goes on for a long time with a large number of errors.

Expected behavior
The synchronization should complete within an acceptable time and with fewer errors.


Versions (please complete the following information):

  • 3.4.0

Additional context
As discussed with AWS team, using batch update APIs for ElasticSearch may significantly address this issue.

Add Profile Support

Add Profile support for:

  • US Core

  • Carin-BB

  • DaVinci PDEX

  • DaVinci PDEX Formulary

  • DaVinci PDEX Plan Net.

  • Patient API
    • Patient API - General
      • C4BB Patient
    • Patient API - Claims & Encounters
      • C4BB Coverage
      • C4BB ExplanationOfBenefit Inpatient Institutional
      • C4BB ExplanationOfBenefit Outpatient Institutional
      • C4BB ExplanationOfBenefit Pharmacy
      • C4BB ExplanationOfBenefit Professional NonClinician
      • Location
      • C4BB Organization
      • C4BB Practitioner
      • US Core PractitionerRole Profile
    • Patient API - Clinical
      • US Core AllergyIntolerance Profile
      • US Core CarePlan Profile
      • US Core CareTeam Profile
      • US Core Condition Profile
      • PDex Device
      • US Core DiagnosticReport Profile for Laboratory Results Reporting
      • US Core DiagnosticReport Profile for Report and Note exchange
      • US Core Laboratory Result Observation Profile
      • US Core DocumentReference Profile
      • US Core Encounter Profile
      • US Core Goal Profile
      • US Core Immunization Profile
      • US Core Medication Profile
      • US Core MedicationRequest Profile
      • PDEX MedicationDispense
      • US Core Pediatric BMI for Age Observation Profile
      • US Core Pediatric Head Occipital-frontal Circumference Percentile Profile
      • US Core Pediatric Weight for Height Observation Profile
      • US Core Pulse Oximetry Profile
      • US Core Smoking Status Observation Profile
      • US Core Procedure Profile

  • PdexEntitySourceProvenance

  • PDex Source Provenance

  • Vital Signs Profile

  • Interoperability API

  • Interoperability - Drug Formulary

  • Coverage Plan

  • Formulary Drug (extends MedicationKnowledge)

  • Interoperability - Provider Directory

  • PlannetEndpoint

  • PlannetHealthcareService

  • PlannetInsurancePlan

  • PlannetLocation

  • PlannetOrganizationAffiliation

  • PlannetPractitionerRole

Throw error if resource found does not match resourceType

Defect

Currently DynamoDbHelper does not validate that the resourceType of the DynamoDB record retrieved via getMostRecentResource() or getMostRecentValidResource() matches the resourceType passed into those functions. This allows an API client to request resources of a different FHIR resource type through the wrong URLs.

Actual
A Patient record with id=8a9f93c9-0f37-4a98-bb55-76992774a98b can be retrieved through the URL /ExplanationOfBenefits/8a9f93c9-0f37-4a98-bb55-76992774a98b.

Expected
The above Patient should only be accessible via the URL /Patient/8a9f93c9-0f37-4a98-bb55-76992774a98b. Additionally, the URL /ExplanationOfBenefits/8a9f93c9-0f37-4a98-bb55-76992774a98b should return an HTTP 404 - Not Found.
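The missing check can be sketched as a guard that compares the stored record's resourceType with the type taken from the request URL and treats a mismatch as not-found. The record shape and error style are illustrative, not the actual DynamoDbHelper code:

```typescript
// Simplified stand-in for a record retrieved from DynamoDB.
interface StoredRecord {
  id: string;
  resourceType: string;
}

// Reject cross-type access: a Patient fetched via /ExplanationOfBenefit/{id}
// must behave as if the resource does not exist (HTTP 404).
function assertResourceTypeMatches(record: StoredRecord, requestedType: string): StoredRecord {
  if (record.resourceType !== requestedType) {
    throw new Error(`404 Not Found: ${requestedType}/${record.id}`);
  }
  return record;
}
```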

[updateCreate] Support update operation creating initial version of resource

Hi AWS team,
If we try to create a new resource via an update operation (as per https://www.hl7.org/fhir/http.html#update), the current code doesn't support this.
Previously we were doing this via a create operation, but the code changes on Feb 5th (4f433d2) now prevent us from doing this!

I'd have thought it'd be relatively easy to trap the resource-not-found case in

async updateResource(request: UpdateResourceRequest) {

and then invoke the create operation?
Many thanks,
Allan
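The suggested fallback can be sketched as follows: if the update finds no existing resource, create it instead (FHIR "update as create"). The service interface below is a simplified stand-in, not the package's real Persistence interface:

```typescript
interface Resource {
  id: string;
  body: string;
}

// Simplified stand-in for the persistence service.
interface PersistenceLike {
  readResource(id: string): Resource | undefined;
  createResource(resource: Resource): Resource;
  updateResource(resource: Resource): Resource;
}

// Update-as-create: trap the not-found case and invoke create instead.
function updateOrCreate(service: PersistenceLike, resource: Resource): Resource {
  const existing = service.readResource(resource.id);
  if (existing === undefined) {
    return service.createResource(resource); // initial version via update
  }
  return service.updateResource(resource);
}
```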

Support resources that are >400 KB

FHIR resources are currently stored in DynamoDB, and DynamoDB has an item size limit of 400 KB, meaning that if a user tries to create a Patient record larger than 400 KB it will fail. This task is to research and implement how we can remove this limitation.

A/C:

  • Add functionality
  • Update integration test

[Bug] Response Resource of creating Binary resource doesn't contain custom fields with HAPI FHIR library

Describe the bug
The PresignedPutUrl field in the Binary resource does not comply with the FHIR spec, so the popular HAPI FHIR library does not return the response Binary with the PresignedPutUrl field when creating a Binary resource.

Of course, I could reach out to the HAPI FHIR team to support custom resource fields, and they may add the support. However, since the custom fields are not aligned with the FHIR spec, I think it would be better to fix the issue in this repo.

One possible solution is this PR. In the PR, I moved the custom fields to _data.extension. As you may already know, the parent of Binary is Resource, meaning we cannot add extensions directly to the Binary resource. That's why I use the extension field.

To Reproduce
Steps to reproduce the behavior:

  1. Use HAPI FHIR client to create Binary resource
        Binary binary = new Binary();
        binary.setContentType("application/pdf");
        MethodOutcome response = this.hapiFhirClient.create()
                .resource(binary)
                .encodedJson()
                .execute();
        Binary responseResource = (Binary) response.getResource();

Expected behavior
responseResource has the value of PresignedPutUrl


Add additional attribute for internal index

Add an attribute for an internal index, with a global secondary index across it, so that there can be an additional lookup outside of Elasticsearch. The local index is within the JSON; if this index could be sent and stored in this new index attribute, our Flink process could do a direct lookup through DynamoDB, helping with performance.

[Feature Request] Need New documentStatus for Old Versions of Resources and Reverted Resources

Is your feature request related to a problem? Please describe.
If a resource is updated with a new version, the old version is marked as 'DELETED'. This makes references to the old version point to an invalid ('DELETED') resource.

Sometimes we need to 'revert' resources added to DynamoDB, but we cannot do a hard delete or mark them as 'DELETED' because there might be a reference pointing to them. We would like a new status (e.g. 'REVERTED').

Describe the solution you'd like
We would like to have a new status (e.g. OLD, STALE, OUT_OF_DATE, etc.) for old versions of resources.
When a resource is updated with a new version, the old version would be updated to this new status (in dynamoDbBundleService).

After reverting, the reverted version could be the only one for the resource. We would like to make the new status (e.g. 'REVERTED') user-readable (in dynamoDbHelper.getMostRecentUserReadableResource()).

We would like to have resources with these new statuses indexed in ES by ddbToESSync.


Return error if createResource has a duplicate id specified

The fhir-works-on-aws-persistence-ddb dynamoDbDataService createResource function allows specifying a database id (for DynamoDB, a generated UUID), but it does not check whether the ID has already been used. It should return an error. We are using the internal API during uploads instead of going through the FHIR API. The createResource method is defined in:

async createResource(request: CreateResourceRequest)

Acceptance criteria:

Writing a record using the fhir-works-on-aws-persistence-ddb dynamoDbDataService createResource function returns an error if the id is specified and already exists in the database.
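One way to enforce this at write time is a DynamoDB condition expression, so the Put fails when an item with the same key already exists. With this table's id/vid composite key, the condition below guards the initial version; the DocumentClient-style parameter shape is an illustrative sketch, not the package's actual code:

```typescript
// Build PutItem parameters that reject a duplicate id at write time:
// DynamoDB fails the Put with a ConditionalCheckFailedException if an item
// with this key already exists, which the caller can map to an error response.
function buildCreateIfAbsentParams(
  tableName: string,
  item: { id: string; vid: number },
): { TableName: string; Item: { id: string; vid: number }; ConditionExpression: string } {
  return {
    TableName: tableName,
    Item: item,
    ConditionExpression: 'attribute_not_exists(id)',
  };
}
```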

Support bundle size >25 entries

Currently there is a self-imposed limitation that bundles must contain 25 entries or fewer. This is due to DynamoDB batch operations supporting at most 25 operations at once. This task will create functionality to break bundle requests into 25-request 'chunks' and iteratively send the updates to DynamoDB.

NOTE

  • 'Batch' requests do not require as much overhead, so I would suggest working on the batch feature first, prior to this one
  • Also, after this is resolved, there will be a new limitation: the API Gateway/Lambda request and response size limit of 6 MB

FHIR spec for context: https://www.hl7.org/fhir/http.html#transaction

A/C:

  • Add functionality
  • Update integration test
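The chunking described above can be sketched as a small helper that splits bundle entries into groups of at most 25, so each group fits in one DynamoDB batch call; the caller would then send the chunks iteratively:

```typescript
// Split bundle entries into chunks of at most 25 (DynamoDB's batch limit),
// to be sent iteratively by the caller.
function chunkEntries<T>(entries: T[], maxPerChunk = 25): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < entries.length; i += maxPerChunk) {
    chunks.push(entries.slice(i, i + maxPerChunk));
  }
  return chunks;
}
```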

AWS region us-west-2 hardcoded in IS_OFFLINE mode

We are attempting to run the fhir-service lambda locally with the environment variable IS_OFFLINE=true, but while still communicating with real DynamoDB and ElasticSearch instances in the us-east-1 AWS region.

When the IS_OFFLINE environment variable is set, the AWS JavaScript SDK is reconfigured to use the (hard-coded) us-west-2 region. This region should instead be read from environment variables so that it can be dynamically configured. This would allow a local instance of the fhir-service server to communicate with AWS resources in the region of choice.

Version: v2.0.1
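The fix the issue asks for can be sketched as resolving the offline-mode region from standard environment variables, falling back to us-west-2 only when nothing is configured. The lookup order below is an illustrative suggestion:

```typescript
// Resolve the region for IS_OFFLINE mode from the environment instead of
// hard-coding us-west-2; keep the old value only as a last-resort default.
function resolveOfflineRegion(env: Record<string, string | undefined>): string {
  return env.AWS_REGION || env.AWS_DEFAULT_REGION || 'us-west-2';
}
```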

DynamoDB streams to ES Issue - Not able to handle different date formats.

Resources are not getting loaded into Elasticsearch from DynamoDB streams when the same date field has different date-time formats. As per HL7, a date can be in any of the following formats: "2020", "2021-01", "1905-08-23", "2015-02-07T13:28:17-05:00", or "2017-01-01T00:00:00.000Z".

For example, from the FHIR website's AllergyIntolerance example:

"reaction": [
    {
        "manifestation": [
            {
                "coding": [ ... ]
            }
        ],
        "description": "Challenge Protocol. Severe reaction to subcutaneous cashew extract. Epinephrine administered",
        "onset": "2012-06-12",
        "severity": "severe",
        "exposureRoute": {
            "coding": [ ... ]
        }
    },
    {
        "manifestation": [
            {
                "coding": []
            }
        ],
        "onset": "2004",
Here onset has two different formats, so when ES builds the index for the onset field, the first value is treated as dateTime and the next is treated as text/string. As a result, records are not getting loaded into ES.

Note: failed records are not moved to a DLQ and no logs are available. We need to add logs/a DLQ to find these types of bugs.
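An ingestion step could detect which FHIR date precision a value uses and normalize (or route unparseable values to a DLQ) before indexing. This is a sketch covering the formats quoted above, not the actual sync lambda's logic:

```typescript
type FhirDatePrecision = 'year' | 'year-month' | 'date' | 'dateTime' | 'unknown';

// Classify a FHIR date/dateTime string by its precision, so an indexing step
// can normalize values to one representation before writing to ES.
function classifyFhirDate(value: string): FhirDatePrecision {
  if (/^\d{4}$/.test(value)) return 'year';
  if (/^\d{4}-\d{2}$/.test(value)) return 'year-month';
  if (/^\d{4}-\d{2}-\d{2}$/.test(value)) return 'date';
  if (/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/.test(value)) {
    return 'dateTime';
  }
  return 'unknown';
}
```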

Maintain referential integrity for creates

Resources can currently be created with references to missing resources; this should return an error, specifically an HTTP 422 Unprocessable Entity.

For example, use Postman to create a Patient using the following HTTP body:

{
    "active": true,
    "maritalStatus": "U",
    "resourceType": "Patient",
    "identifier": [
        {
            "period": {
                "start": "2011-03-29T05:05:56.968Z",
                "end": "2025-02-07T13:28:17-05:00"
            },
            "system": "http://www.acmehosp.com/patients",
            "type": "TAX",
            "value": "P7822793277",
            "use": "usual"
        }
    ],
    "meta": {
        "lastUpdated": "2020-09-22T16:32:31.155Z",
        "versionId": "1"
    },
    "managingOrganization": {
        "reference": "Organization/1"
    },
    "id": "8adfb20d-37b4-471c-ae59-39b558606aa0",
    "name": [
        {
            "given": [
                "Hosea"
            ],
            "text": "Hosea Grady",
            "family": "Grady",
            "suffix": [
                "IV"
            ],
            "use": "official",
            "prefix": []
        }
    ],
    "gender": "other"
}

The organization id (1) is invalid. The API returns 201 (Created); it should return 422.

Use the Organization GET API with the organization id of 1. It returns 404, indicating that the Organization is missing.

For an example of the correct functionality, use the http://hapi.fhir.org/create test page.

AC:

Creating any resource with a reference to a missing resource returns 422 instead of success.
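The first half of such a check is collecting every reference a create would need to verify. A sketch that walks a resource and gathers all "reference" strings, which could then be resolved before accepting the write (illustrative, not the project's validator):

```typescript
// Recursively walk a parsed FHIR resource and collect every "reference"
// string (e.g. "Organization/1"), so each can be checked for existence
// before the create is accepted.
function collectReferences(node: unknown, found: string[] = []): string[] {
  if (Array.isArray(node)) {
    node.forEach((item) => collectReferences(item, found));
  } else if (typeof node === 'object' && node !== null) {
    for (const [key, value] of Object.entries(node as Record<string, unknown>)) {
      if (key === 'reference' && typeof value === 'string') {
        found.push(value);
      } else {
        collectReferences(value, found);
      }
    }
  }
  return found;
}
```

If any collected reference fails to resolve, the create would be rejected with 422.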

Maintain referential integrity for deletes

Resources can currently be deleted while they are still referenced by other resources. Such a delete should return an HTTP 409 Conflict.

For example, use Postman to create this Organization:

{
    "resourceType": "Organization",
    "identifier": [
        {
            "value": "Gastro",
            "system": "http://www.acme.org.au/units"
        }
    ],
    "meta": {
        "lastUpdated": "2020-09-18T20:10:46.184Z",
        "versionId": "1"
    },
    "telecom": [
        {
            "value": "+1 555 234 3523",
            "use": "mobile",
            "system": "phone"
        },
        {
            "value": "[email protected]",
            "use": "work",
            "system": "email"
        }
    ],
    "text": {
        "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\">\n      \n      <p>Gastroenterology @ Acme Hospital. ph: +1 555 234 3523, email: \n        <a href=\"mailto:[email protected]\">[email protected]</a>\n      </p>\n    \n    </div>",
        "status": "generated"
    },
    "name": "Gastroenterology"
}

Note the ORGANIZATION_ID and use it to create this Patient in Postman:

{
    "active": true,
    "maritalStatus": "U",
    "resourceType": "Patient",
    "identifier": [
        {
            "period": {
                "start": "2011-03-29T05:05:56.968Z",
                "end": "2025-02-07T13:28:17-05:00"
            },
            "system": "http://www.acmehosp.com/patients",
            "type": "TAX",
            "value": "P7822793277",
            "use": "usual"
        }
    ],
    "meta": {
        "lastUpdated": "2020-09-22T16:32:31.155Z",
        "versionId": "1"
    },
    "managingOrganization": {
        "reference": "Organization/<ORGANIZATION_ID>"
    },
    "id": "8adfb20d-37b4-471c-ae59-39b558606aa0",
    "name": [
        {
            "given": [
                "Hosea"
            ],
            "text": "Hosea Grady",
            "family": "Grady",
            "suffix": [
                "IV"
            ],
            "use": "official",
            "prefix": []
        }
    ],
    "gender": "other"
}

The Patient is created with a reference to Organization/<ORGANIZATION_ID>.

Use Postman to delete this Organization <ORGANIZATION_ID>

It returns 200, indicating the Organization was deleted. It should return 409.

For an example of the correct functionality, use the http://hapi.fhir.org/create and related test pages.

Acceptance criteria:

  • Deleting a resource that is referenced in another resource returns 409 instead of success.
