repository-service-tuf / repository-service-tuf-worker


Repository Service for TUF: Worker

License: MIT License

Dockerfile 0.41% Makefile 0.74% Python 95.80% Shell 2.88% Mako 0.17%
celery consumer hacktoberfest repository security service tuf worker

repository-service-tuf-worker's Introduction

Repository Service for TUF (RSTUF)


OpenSSF Best Practices

Repository Service for TUF (RSTUF) is a collection of components that provide services for securing content downloads from tampering between the repository and the client (for example, by an on-path attacker).

RSTUF security properties are achieved by implementing The Update Framework (TUF) as a service.

Repository Service for TUF is platform, artifact, language, and process-flow agnostic.

RSTUF simplifies the adoption of TUF by removing the need to design a repository integration -- RSTUF encapsulates that design.

Repository Service for TUF (RSTUF) is designed to be integrated with existing content delivery solutions -- at the edge or in public/private clouds -- alongside current artifact production systems, such as build systems (Jenkins, GitHub Actions, GitLab, CircleCI, etc.). RSTUF protects downloading, installing, and updating content from arbitrary content repositories, such as a web server, JFrog Artifactory, GitHub Packages, etc.

If a user wants to integrate RSTUF into an existing CI/CD pipeline the only requirement is to make a REST API request to RSTUF:

[Diagram: RSTUF integration with a CI/CD pipeline via the REST API]
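For illustration only, a CI/CD step could submit new artifacts to RSTUF with a single HTTP call such as the Python sketch below; the endpoint path and payload field names are assumptions, and the authoritative contract is documented in the Repository Service for TUF Guide and the API's OpenAPI schema.

import requests

# Hypothetical payload shape: the field names here are illustrative only.
payload = {
    "targets": [
        {
            "path": "v1.0.0/demo-package.tar.gz",
            "info": {"length": 1024, "hashes": {"blake2b-256": "<hex digest>"}},
        }
    ]
}

# Hypothetical endpoint; check the RSTUF API documentation for the real path.
response = requests.post("http://rstuf-api.example.com/api/v1/targets", json=payload)
response.raise_for_status()
print(response.json())  # typically includes a task id that can be polled for status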

The same can be said when a user wants to integrate RSTUF into an existing distribution platform:

[Diagram: RSTUF integration with an existing distribution platform]

Thanks to the REST API, integrating RSTUF into existing content delivery solutions is straightforward. Furthermore, RSTUF is designed for scalability and can support active repositories with multiple repository workers.

At present, RSTUF implements a streamlined variant of the Python Package Index (PyPI)'s PEP 458 - Secure PyPI downloads with signed repository metadata. In the future, RSTUF will grow to provide additional protections through supporting the end-to-end signing of packages, comparable to PyPI's PEP 480 - Surviving a Compromise of PyPI: End-to-end signing of packages.

How does Repository Service for TUF compare to other solutions?

Rugged: Repository Service for TUF is a collection of services to deploy a scalable and distributed TUF Repository. RSTUF provides an easy interface to integrate (the REST API) and a tool for managing the Metadata Repository (CLI).

PyPI/PEP 458: Repository Service for TUF is a generalization of the design in PEP 458 that can be integrated into a variety of content repository architectures.

[Diagram: comparison of RSTUF with other solutions]

Using

Please check the Repository Service for TUF Guide for instructions on deployment, usage, and more details.

Contributing

This git repository contains high-level documentation guides and component integrations.

Check our CONTRIBUTING.rst for more details on how to contribute.

Please check the Repository Service for TUF Development Guide.

Questions, feedback, and suggestions are welcome on the #repository-service-for-tuf channel on OpenSSF Slack.

repository-service-tuf-worker's People

Contributors

breakingpitt, dependabot[bot], enyinna1234, github-actions[bot], juniorlpa, kairoaraujo, kapsalis, kauth, lukpueh, mvrachev, rdimitrov


repository-service-tuf-worker's Issues

Enable the scalability for `kaprien-repo-worker`

Implement the improvement described in this suggestion:

Main goals/steps

  • RepositoryMetadata.add_targets no longer bumps the Snapshot/Timestamp; it returns the targets_meta
  • Add to RepositoryMetadata a function to publish targets meta (publish_targets_meta) that receives a list of delegated bins_names
    • It must check that the latest Snapshot does not already include them
  • Implement in kaprien.main, specifically in add_targets, a condition to store in KAPRIEN_REDIS_SERVER the meta_names returned by RepositoryMetadata.add_targets without the version (a list of unpublished metas)
    • Lock before adding to the unpublished metas
  • Implement in kaprien.main a new condition, publish_targets_meta, that will (see the sketch after this list)
    • Lock Snapshot/Timestamp
    • Check if there are unpublished metas and call RepositoryMetadata.publish_targets_meta to publish them
    • Flush the unpublished metas
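A minimal sketch of that flow, assuming a Redis client and a RepositoryMetadata instance are available; the key name, lock name, and comma-separated storage format are assumptions, not the project's actual identifiers.

import redis

UNPUBLISHED_KEY = "unpublished_metas"   # hypothetical key name
LOCK_NAME = "TUF_SNAPSHOT_TIMESTAMP"    # hypothetical lock name


def publish_targets_meta(redis_client: redis.Redis, repository) -> None:
    """Publish the delegated bins metadata queued by add_targets."""
    with redis_client.lock(LOCK_NAME, timeout=60.0):
        raw = redis_client.get(UNPUBLISHED_KEY) or b""
        bins_names = [name for name in raw.decode().split(",") if name.strip()]
        if not bins_names:
            return  # nothing queued since the last run
        # RepositoryMetadata.publish_targets_meta is expected to skip names
        # the latest Snapshot already includes before bumping it.
        repository.publish_targets_meta(bins_names)
        redis_client.delete(UNPUBLISHED_KEY)  # flush the unpublished metas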

Make the Lock timeout configurable

Make the Lock timeout configurable for RSTUF Workers.
See self._redis.lock in repo_worker.tuf.repository*

Currently, the timeout is hardcoded and should be configurable for the deployment.
Depending on the environment and amount of targets per task/request, it can take longer than 5 seconds.

One use case is an environment that adds all targets once per day (a big batch), which can take minutes to process.
The best approach is an optional configuration that the user can tune based on their use case.
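A small sketch of what the configuration could look like, assuming a Dynaconf-style settings object; the setting name LOCK_TIMEOUT and the default value are assumptions.

LOCK_TIMEOUT_DEFAULT = 60.0  # seconds; the currently hardcoded value is 5.0


def get_lock_timeout(settings) -> float:
    """Read the lock timeout from the worker settings, falling back to a default."""
    return float(settings.get("LOCK_TIMEOUT", LOCK_TIMEOUT_DEFAULT))


# Usage inside repo_worker.tuf.repository*:
# with self._redis.lock("TUF_SNAPSHOT_TIMESTAMP", timeout=get_lock_timeout(self._settings)):
#     ...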

Revert KAPRIEN_STATUS_BACKEND_SERVER to KAPRIEN_REDIS_SERVER

Redis will be a required service for Kaprien for two primary purposes:

  • Backend service, storing the results of the tasks. In the future it is possible to add some flexibility with a third service, but for now the goal is to reduce complexity.
  • Store the Repository Settings across multiple kaprien-repo-workers.
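A rough sketch of how a single KAPRIEN_REDIS_SERVER value could serve both purposes; the broker variable name and the exact wiring are assumptions.

import os

import redis
from celery import Celery

redis_server = os.environ["KAPRIEN_REDIS_SERVER"]  # e.g. "redis://redis:6379"

app = Celery(
    "kaprien_repo_worker",
    broker=os.environ.get("KAPRIEN_RABBITMQ_SERVER", redis_server),  # hypothetical variable name
    backend=redis_server,  # backend service: task results
)

settings_store = redis.Redis.from_url(redis_server)  # shared repository settings across workers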

Latest sslib update isn't compatible with the worker

Describe the bug

See the functional test https://github.com/vmware/repository-service-tuf/actions/runs/3329935767/jobs/5507780019

Commit mentioned: 8ef18e2

https://github.com/vmware/repository-service-tuf-worker/blob/ab37ddf8259f92ebb4eaf463d1cc4822bc910d56/requirements.txt#L27

repository-service-tuf-worker_1  | [2022-10-27 06:49:53,664: ERROR/ForkPoolWorker-8] Task app.repository_service_tuf_worker[74bb50defe804099adde7b7dd0759080] raised unexpected: TypeError("LocalStorage.put() got an unexpected keyword argument 'restrict'")
repository-service-tuf-worker_1  | Traceback (most recent call last):
repository-service-tuf-worker_1  |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
repository-service-tuf-worker_1  |     R = retval = fun(*args, **kwargs)
repository-service-tuf-worker_1  |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
repository-service-tuf-worker_1  |     return self.run(*args, **kwargs)
repository-service-tuf-worker_1  |   File "/opt/repository-service-tuf-worker/app.py", line 75, in repository_service_tuf_worker
repository-service-tuf-worker_1  |     result = repository_action(payload, update_state=self.update_state)
repository-service-tuf-worker_1  |   File "/opt/repository-service-tuf-worker/repository_service_tuf_worker/repository.py", line 384, in bootstrap
repository-service-tuf-worker_1  |     metadata.to_file(
repository-service-tuf-worker_1  |   File "/usr/local/lib/python3.10/site-packages/tuf/api/metadata.py", line 339, in to_file
repository-service-tuf-worker_1  |     persist_temp_file(temp_file, filename, storage_backend)
repository-service-tuf-worker_1  |   File "/usr/local/lib/python3.10/site-packages/securesystemslib/util.py", line 222, in persist_temp_file
repository-service-tuf-worker_1  |     storage_backend.put(temp_file, persist_path, restrict=restrict)
repository-service-tuf-worker_1  | TypeError: LocalStorage.put() got an unexpected keyword argument 'restrict'
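The traceback shows the newer securesystemslib passing restrict= to storage_backend.put(). A rough sketch of a compatible LocalStorage.put() is below; the permission handling is an assumption about what restrict should do, not the project's actual fix.

import os
import shutil


class LocalStorage:
    def __init__(self, base_dir: str) -> None:
        self._base_dir = base_dir

    def put(self, fileobj, filename: str, restrict: bool = False) -> None:
        """Persist fileobj under base_dir; restrict limits permissions to the owner."""
        path = os.path.join(self._base_dir, filename)
        fileobj.seek(0)
        with open(path, "wb") as destination:
            shutil.copyfileobj(fileobj, destination)
        if restrict:
            os.chmod(path, 0o600)  # owner read/write only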

Reproduction steps

  1. Update securesystemslib
  2. Deploy the 'dev' tag containers
  3. Use the latest CLI version
  4. Perform the Ceremony
  5. Bootstrap the RSTUF

Expected behavior

Compatible with the latest securesystemslib

Additional context

The Functional test needs to run when bumping the dependencies (#116)

Or before release (#117)

Bug in the unpublished-metas job triggered by Delete/Remove targets

Scenario:

Remove target files that don't exist in the metadata.

Behavior

The bump-target-meta job starts showing the errors below.

Probably the "unpublished-metas" value in Redis is an empty string.

kaprien-repo-worker_1  | [2022-09-23 14:40:00,119: ERROR/ForkPoolWorker-2] Task app.kaprien_repo_worker[publish_targets_meta] raised unexpected: StorageError("Can't open Role ''")
kaprien-repo-worker_1  | Traceback (most recent call last):
kaprien-repo-worker_1  |   File "/opt/kaprien-repo-worker/repo_worker/services/storage/local.py", line 60, in get
kaprien-repo-worker_1  |     file_object = open(filename, "rb")
kaprien-repo-worker_1  | FileNotFoundError: [Errno 2] No such file or directory: '/var/opt/kaprien/storage/1..json'
kaprien-repo-worker_1  | 
kaprien-repo-worker_1  | During handling of the above exception, another exception occurred:
kaprien-repo-worker_1  | 
kaprien-repo-worker_1  | Traceback (most recent call last):
kaprien-repo-worker_1  |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
kaprien-repo-worker_1  |     R = retval = fun(*args, **kwargs)
kaprien-repo-worker_1  |   File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
kaprien-repo-worker_1  |     return self.run(*args, **kwargs)
kaprien-repo-worker_1  |   File "/opt/kaprien-repo-worker/app.py", line 70, in kaprien_repo_worker
kaprien-repo-worker_1  |     result = repository_action()
kaprien-repo-worker_1  |   File "/opt/kaprien-repo-worker/repo_worker/repository.py", line 443, in publish_targets_meta
kaprien-repo-worker_1  |     bins_role = self._load(bins_name)
kaprien-repo-worker_1  |   File "/opt/kaprien-repo-worker/repo_worker/repository.py", line 192, in _load
kaprien-repo-worker_1  |     return Metadata.from_file(role_name, None, self._storage_backend)
kaprien-repo-worker_1  |   File "/usr/local/lib/python3.10/site-packages/tuf/api/metadata.py", line 233, in from_file
kaprien-repo-worker_1  |     with storage_backend.get(filename) as file_obj:
kaprien-repo-worker_1  |   File "/usr/local/lib/python3.10/contextlib.py", line 135, in __enter__
kaprien-repo-worker_1  |     return next(self.gen)
kaprien-repo-worker_1  |   File "/opt/kaprien-repo-worker/repo_worker/services/storage/local.py", line 63, in get
kaprien-repo-worker_1  |     raise StorageError(f"Can't open Role '{role}'")
kaprien-repo-worker_1  | securesystemslib.exceptions.StorageError: Can't open Role ''
web_1                  | 172.18.0.1 - - [23/Sep/2022 14:40:44] "GET / HTTP/1.1" 200 -
web_1                  | 172.18.0.1 - - [23/Sep/2022 14:40:57] "GET /4.snapshot.json HTTP/1.1" 200 -
web_1                  | 172.18.0.1 - - [23/Sep/2022 14:41:21] "GET /2.bins-e.json HTTP/1.1" 200 -
web_1                  | 172.18.0.1 - - [23/Sep/2022 14:41:24] "GET /2.bins-2.json HTTP/1.1" 304 -
web_1                  | 172.18.0.1 - - [23/Sep/2022 14:41:29] "GET /2.bins-3.json HTTP/1.1" 304 -

[The same StorageError("Can't open Role ''") traceback repeats each minute (14:41, 14:42, 14:43, ...) as Celery Beat re-sends the publish_targets_meta task, while web_1 keeps serving the existing metadata.]
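A minimal guard sketch, assuming the unpublished metas are stored as a comma-separated string in Redis (as the empty-string behavior in the log suggests); the key name and format are assumptions.

import redis

UNPUBLISHED_KEY = "unpublished_metas"  # hypothetical key name


def queue_unpublished(redis_client: redis.Redis, bins_names: list) -> None:
    """Queue only non-empty bin names; never write a bare empty string."""
    names = [name for name in bins_names if name and name.strip()]
    if not names:
        return
    raw = (redis_client.get(UNPUBLISHED_KEY) or b"").decode()
    current = [n for n in raw.split(",") if n]
    redis_client.set(UNPUBLISHED_KEY, ",".join(sorted(set(current + names))))


def read_unpublished(redis_client: redis.Redis) -> list:
    """Return queued bin names, dropping empty entries so '' is never loaded as a role."""
    raw = (redis_client.get(UNPUBLISHED_KEY) or b"").decode()
    return [n for n in raw.split(",") if n.strip()]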

Use pinned hash GHA version in the Test Docker Build pipeline

Persist the repository settings sent by API

For every task sent by the API with settings, persist those settings.
This enables the kaprien-repo-worker to start and execute tasks by itself and removes the overhead of the API sending the scheduled tasks.
The scheduling can be done using Celery Beat.
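A sketch of one way to persist those settings, assuming a Redis hash as the shared store; the key name and serialization are assumptions.

import json

import redis

SETTINGS_KEY = "repository_settings"  # hypothetical key name


def persist_settings(redis_client: redis.Redis, settings: dict) -> None:
    """Store the settings so any worker (or a Celery Beat job) can read them later."""
    for name, value in settings.items():
        redis_client.hset(SETTINGS_KEY, name, json.dumps(value))


def load_settings(redis_client: redis.Redis) -> dict:
    raw = redis_client.hgetall(SETTINGS_KEY)
    return {key.decode(): json.loads(value) for key, value in raw.items()}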

Implement the retention in the storage backend

The retention configuration should be part of the Repository Settings.
The default value is used when this configuration item (retention) is not found.

Suggestion

The default value is 2

[latest].{role}.json
[latest-1].{role}.json

Example

32.snapshot.json
31.snapshot.json

All versions older than those two will be removed (cleaned up).
This will be implemented in the Interface/Service.
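A minimal sketch of the clean-up for a local storage backend, assuming the <version>.<role>.json naming shown above; the function name and its placement in the service interface are assumptions.

import os
import re

VERSIONED = re.compile(r"^(?P<version>\d+)\.(?P<role>.+)\.json$")


def apply_retention(storage_dir: str, role: str, retention: int = 2) -> None:
    """Remove all but the newest `retention` versions of `<version>.<role>.json`."""
    versions = []
    for filename in os.listdir(storage_dir):
        match = VERSIONED.match(filename)
        if match and match.group("role") == role:
            versions.append((int(match.group("version")), filename))
    for _, filename in sorted(versions, reverse=True)[retention:]:
        os.remove(os.path.join(storage_dir, filename))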

Not bumping versions of timestamp, snapshot, bins

The user reported that timestamp, snapshot, and bins metadata expired.
Logs were shared, and it seems kaprien-repo-worker doesn't identify the bootstrap and skips the bump.

The user also reported that the host machine was rebooted due to an OS update.

Metadata Client cannot download files

Implement two message queues

Implement two message queues, repository_metadata and kaprien_internals.

  • repository_metadata will be used for tasks sent through the Rest API
  • kaprien_internals will be used for Celery Beat Jobs

This avoids the situation where, for some reason, the repository_metadata workers are busy and the internal tasks cannot be performed.
The kaprien_internals queue must have a higher priority.

It also gives flexibility for complex deployments: admins can choose to have worker nodes only for internal jobs (i.e., bumping Snapshot/Timestamp and publishing new target metas). See the routing sketch after the list below.

  • Use supervisord with supervisor.conf
    • Provide /$DATA_DIR/supervisor.conf
    • Document that admins can use it to avoid starting Celery Beat on some nodes (which requires those nodes to have no scheduled jobs)
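A rough sketch of the routing, assuming Celery with a broker already configured; the task names in the routes are illustrative and the priority handling shown is only one option.

from celery import Celery
from kombu import Queue

app = Celery("kaprien_repo_worker", broker="amqp://guest@rabbitmq//")

app.conf.task_queues = (
    Queue("repository_metadata"),  # tasks sent through the REST API
    Queue("kaprien_internals"),    # Celery Beat jobs (intended to have higher priority)
)
app.conf.task_routes = {
    "app.kaprien_repo_worker": {"queue": "repository_metadata"},
    "app.publish_targets_meta": {"queue": "kaprien_internals"},  # hypothetical task name
}

# A node dedicated to internal jobs could then run only:
#   celery -A app worker -Q kaprien_internals --beat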

Task: Refactor contribution content in README and CONTRIBUTING files

What is the task about?

Based on the discussions in repository-service-tuf/repository-service-tuf-cli#110, we need to refactor the contribution-related content of the README.rst and CONTRIBUTING.rst files accordingly.

Parent feature

No response

References

Code of Conduct

  • I agree to follow this project's Code of Conduct

Bug: Locks are not distributed

What happened?

This is related to both the new and the old implementation -- it is better to fix it in the new implementation.

If a lock reaches the timeout, it doesn't work correctly.
The self._redis.lock() is not distributed (repository_service_tuf_worker.repository).

We use the lock to protect against two processes/threads operating and writing the metadata to the Storage Backend.
We need to investigate a better distributed lock to use with Celery tasks.

What steps did you take?

Add a big task/request that takes more than 5 seconds to be processed.

You will see the exception raised in the worker.


What behavior did you expect?

Handle the locks in a distributed way.
We have multiple Workers and multi-thread operations.
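One possible mitigation to investigate, sketched below under the assumption that redis-py locks remain in use: acquire with a generous, configurable timeout and a blocking wait, and release defensively so an expired lock is handled instead of crashing the task. A fully distributed lock (e.g. a Redlock-style approach) is a separate option to evaluate.

import redis


def run_with_lock(redis_client: redis.Redis, name: str, work, timeout: float = 60.0):
    """Run `work()` under a named Redis lock, waiting for it instead of failing fast."""
    lock = redis_client.lock(name, timeout=timeout, blocking_timeout=timeout)
    if not lock.acquire():
        raise TimeoutError(f"could not acquire lock {name!r} within {timeout}s")
    try:
        return work()
    finally:
        try:
            lock.release()
        except redis.exceptions.LockNotOwnedError:
            # The lock expired while work() ran: log and investigate, don't fail the task.
            pass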

Relevant log output

[2022-12-22 09:47:03,729: ERROR/ForkPoolWorker-9] Task app.repository_service_tuf_worker[publish_targets-ca3f382b6de44d8b91fe2bbd6ecb7926] raised unexpected: LockNotOwnedError("Cannot release a lock that's no longer owned")
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
    return self.run(*args, **kwargs)
  File "/opt/repository-service-tuf-worker/app.py", line 80, in repository_service_tuf_worker
    result = repository_action()
  File "/opt/repository-service-tuf-worker/repository_service_tuf_worker/repository.py", line 433, in publish_targets
    with self._redis.lock("publish_targets", timeout=5.0):
  File "/usr/local/lib/python3.10/site-packages/redis/lock.py", line 168, in __exit__
    self.release()
  File "/usr/local/lib/python3.10/site-packages/redis/lock.py", line 253, in release
    self.do_release(expected_token)
  File "/usr/local/lib/python3.10/site-packages/redis/lock.py", line 259, in do_release
    raise LockNotOwnedError("Cannot release a lock" " that's no longer owned")
redis.exceptions.LockNotOwnedError: Cannot release a lock that's no longer owned

Code of Conduct

  • I agree to follow this project's Code of Conduct

Use pinned hash GHA version in the Publish Docker Dev pipeline

Task: Refactor & add to the Makefile targets for testing and linting

What is the task about?

Based on the discussions in repository-service-tuf/repository-service-tuf-cli#110, we need to refactor the Makefile targets for testing and linting accordingly.
This should also include automating the way we update the dependencies in .pre-commit-config.yaml.
See the references for more information.

Parent feature

No response

References

Code of Conduct

  • I agree to follow this project's Code of Conduct

Implement target deletion method in MetadataRepository

Implement delete_target in the repository.MetadataRepository

The method steps are

  • It will receive a list of targets
  • Group the files by delegated hash bin based on the file name (same as done for add_targets)
    • Common function to be reused by both
  • Lock the TUF_BINS_HASHED
  • For all identified delegated hash bin roles:
    • Load the delegated hash bin role
    • Find the target file
    • If one or more are found and deleted, bump the delegated hash bin role, add it to the "unpublished target metas", and add the targets to a list of found targets (for future task status)
    • If none is found, add it to a list of not-found targets (for future task status)

The unpublished target metas job will then publish the changed delegated hash bin roles into the Snapshot meta, as sketched below.
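A sketch of the method, intended as a MetadataRepository method; the helper names (_get_path_succinct_role, _load, _bump_and_persist, _add_to_unpublished) are assumptions standing in for the repository's real internals.

def delete_targets(self, targets: list) -> dict:
    """Delete target files from their delegated hash bin roles."""
    deleted, not_found = [], []
    # group the paths by delegated hash bin, as add_targets does
    by_bin = {}
    for path in targets:
        by_bin.setdefault(self._get_path_succinct_role(path), []).append(path)

    with self._redis.lock("TUF_BINS_HASHED"):
        for bins_name, paths in by_bin.items():
            role = self._load(bins_name)
            changed = False
            for path in paths:
                if role.signed.targets.pop(path, None) is not None:
                    deleted.append(path)
                    changed = True
                else:
                    not_found.append(path)
            if changed:
                self._bump_and_persist(role, bins_name)
                self._add_to_unpublished(bins_name)  # later published into Snapshot

    return {"deleted": deleted, "not_found": not_found}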

Use pinned hash GHA version in the CD pipeline

Call the version bump (snapshot, bins) right after bootstrap

To avoid a significant delta between doing the Ceremony (offline cases) and the bootstrap, implement the version bump immediately after initiating the bootstrap.

The current behavior

The current behavior: if an RSTUF administrator generates the Ceremony offline and decides to bootstrap with a payload containing expired online metadata (Snapshot, Timestamp, Targets, and BINS), the metadata will only be updated/bumped after the first automatic check (every 10 minutes).
If new targets are added in this interval, for example, that can trigger the update/bump.
This is not critical, but we could update/bump the expired roles during the bootstrap process.

Below you can see the bootstrap

repository-service-tuf-worker_1  | [2023-02-15 16:43:13,241: INFO/MainProcess] Task app.repository_service_tuf_worker[b858518ad9714f5a8b7ec57d552cab96] received
repository-service-tuf-api_1     | INFO:     192.168.112.1:56266 - "POST /api/v1/bootstrap/ HTTP/1.1" 202 Accepted
repository-service-tuf-worker_1  | [2023-02-15 16:43:13,622: INFO/ForkPoolWorker-8] Task app.repository_service_tuf_worker[b858518ad9714f5a8b7ec57d552cab96] succeeded in 0.3773803189396858s: {'status': 'Task finished.', 'details': {'bootstrap': True}, 'last_update': datetime.datetime(2023, 2, 15, 16, 43, 13, 621116)}

Then, ~7 minutes later, you can see the automatic bump.
It means that between the bootstrap and the automatic update/bump there is an 8-minute window.

repository-service-tuf-worker_1  | [2023-02-15 16:50:00,096: INFO/MainProcess] Task app.repository_service_tuf_worker[bump_online_roles] received
repository-service-tuf-worker_1  | [2023-02-15 16:50:00,329: INFO/ForkPoolWorker-9] [scheduled snapshot bump] Snapshot version bumped: 3
repository-service-tuf-worker_1  | [2023-02-15 16:50:00,329: INFO/ForkPoolWorker-9] [scheduled snapshot bump] Timestamp version bumped: 3, new expire 2023-02-16 16:50:00
repository-service-tuf-worker_1  | [2023-02-15 16:50:00,329: ERROR/ForkPoolWorker-9] bin not found, not bumping.
repository-service-tuf-worker_1  | [2023-02-15 16:50:00,331: INFO/ForkPoolWorker-9] Task app.repository_service_tuf_worker[bump_online_roles] succeeded in 0.2347048280062154s: True
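A sketch of the proposed change, assuming the bootstrap handler can reuse the same routines as the scheduled bump task; the method names below are assumptions.

def bootstrap(self, payload, update_state=None) -> dict:
    # Persist the metadata from the Ceremony payload (Root, Targets, Snapshot,
    # Timestamp, and BINS); hypothetical helper name.
    self._store_online_metadata(payload["metadata"])

    # Bump right away instead of waiting for the first scheduled run, so a
    # payload with already-expired online roles is fixed immediately.
    self._bump_bins_roles()
    self._bump_snapshot()
    self._bump_timestamp()

    return {"status": "Task finished.", "details": {"bootstrap": True}}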

How to reproduce

  1. make run-dev
  2. Go to http://127.0.0.1
  3. Authenticate
  4. Go to Post /api/v1/bootstrap
  5. Try out
  6. Execute
  7. Look at the logs

Create new state for adding/deleting targets

When a target is added or deleted (TBD #54), the first status for the task is SUCCESS, but the target hasn't been published yet.
The target is published in the job publish_targets_meta.

The state should be ADDED (unpublished but added to delegated hash bin) -> SUCCESS (target is available in the signed metadata).

This makes the state explicit for third parties that will do a callback.
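A small sketch of how the intermediate state could be reported, using Celery's custom task states; the state names follow this issue, and the meta fields are illustrative.

import enum


class TaskState(str, enum.Enum):
    ADDED = "ADDED"      # written to the delegated hash bin, not yet in Snapshot/Timestamp
    SUCCESS = "SUCCESS"  # published in the signed metadata


# Inside the Celery task, after writing the bins but before publishing:
# self.update_state(state=TaskState.ADDED.value, meta={"targets": target_paths})
# publish_targets_meta later moves the task to SUCCESS once Snapshot is bumped.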
