fedora-copr / copr
RPM build system - upstream for https://copr.fedorainfracloud.org/


copr's People

Contributors

asamalik, bkabrda, clime, conan-kudo, dependabot[bot], dturecek, frostyx, hojang2, hrnciar, hroncok, ignatenkobrain, immanetize, lbarcziova, mavit, mayt34, mfocko, mizdebsk, msrb, nikromen, pjunak, pkking, praiskup, pypingou, ralphbean, schlupov, seocam, skvidal, sorki, tommylike, xsuchy


copr's Issues

copr-cli has no way to list builds of a project

Original issue: https://pagure.io/copr/copr/issue/74
Opened: 2017-06-01 23:33:08
Opened by: firstyear

I would like to ask for a command to list the build IDs of a project.

The motivation is that I want to clean old builds in my copr repo. To do this I need a list of the IDs to pass to delete-build, but there is currently no CLI option for that.

Thanks,


clime commented at 2017-06-02 07:42:11:

That's correct. It is however possible to list the builds for a package:

copr-cli get-package @copr/copr --name copr-frontend --with-all-builds

You need to json-parse that, though (e.g. there is jq in bash, but it's a somewhat difficult tool at first).
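A minimal parsing sketch (in Python, since most of copr is written in it); note that the "builds" list and "id" key are assumptions about the shape of the JSON output, not a documented schema:

```python
import json

def build_ids(get_package_json: str) -> list:
    """Pull build IDs out of `copr-cli get-package ... --with-all-builds`
    output. The "builds"/"id" field names are assumed -- check them against
    the real payload before relying on this."""
    data = json.loads(get_package_json)
    return [build["id"] for build in data.get("builds", [])]

# Feed it e.g. the captured stdout of the command above:
sample = '{"name": "copr-frontend", "builds": [{"id": 1489032}, {"id": 1465211}]}'
print(build_ids(sample))  # → [1489032, 1465211]
```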

Anyway, the builds are also deleted automatically from copr-backend (builds older than 14 days that do not contain the latest package version are deleted every night). What's left on the frontend are then just db records that might serve as a log.


firstyear commented at 2017-06-02 07:52:13:

Yeah, it's these old build records that I want to purge. It's good to know there is a way to list them, but would it be possible to make the command more accessible/friendlier?


clime commented at 2017-06-02 08:15:52:

Well, what exactly do you mean by more accessible or friendlier? People might be interested in more than just the id of a build, so that's why the information is quite dense. More human-readable output (not just json) would be nice, but I think it's quite readable for a human as it is. Otherwise, the command might be hard to find, but I don't think we can do much about that.


firstyear commented at 2017-06-02 08:20:05:

copr-cli list-builds <project name> 
...
<id>   <package name> ....

Would be pretty cool. Similar to what the web UI shows on the "builds" page, but in CLI form.


dturecek commented at 2020-07-20 12:50:03:

Modified in #1443.


frostyx commented at 2020-07-27 11:38:06:

Commit 3f45b32f fixes this issue


praiskup commented at 2020-07-28 09:09:41:

We are getting into an inconsistent state here; e.g. list-packages returns
json output, while list-builds returns tab-separated fields. I think we should
keep this consistent, especially when the "name" of the package isn't known:

1489032 dummy-pkg       failed
1485239 None    canceled
1465211 dummy-pkg       succeeded

Is the 1485239 package name None, or does copr not know the name?
I just filed the somewhat related #1448.
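The ambiguity is easy to demonstrate with a quick sketch (not the actual copr-cli code): a tab-separated text row simply cannot distinguish a missing name from a package literally called "None".

```python
# A genuinely unknown name and a package really named "None" serialize to
# the same tab-separated row, so the consumer cannot tell them apart.
row_unknown = "1485239\t%s\tcanceled" % None   # name unknown to copr
row_literal = "1485239\tNone\tcanceled"        # package actually named "None"
assert row_unknown == row_literal

build_id, name, state = row_literal.split("\t")
print(name)  # the string "None" in both cases
```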


praiskup commented at 2020-10-05 11:20:52:

I am closing this one as the RFE was implemented, but I filed #1532 for the remaining bug.

copr edit-package-tito nulls out fields not edited

Original issue: https://pagure.io/copr/copr/issue/67
Opened: 2017-05-19 11:28:01
Opened by: brianjmurrell

If I have a project defined as:

{
        "copr_id": 14105,
        "enable_net": false,
        "id": 218288,
        "name": "django-picklefield",
        "old_status": null,
        "source_json": "{\"tito_test\": false, \"git_dir\": \"django-picklefield\", \"git_url\": \"https://github.com/intel-hpdd/manager-for-lustre-dependencies.git\", \"git_branch\": \"\"}",
        "source_type": "git_and_tito",
        "webhook_rebuild": true
}

and I run copr edit-package-tito --name django-picklefield --git-url https://github.com/intel-hpdd/manager-for-lustre-dependencies.git --git-branch master manager-for-lustre on it, it (a) requires --git-url (which it should not, as that property already exists) and (b) touches more than just the git branch property (see the webhook_rebuild property). Here is the resulting package configuration:

{
        "copr_id": 14105,
        "enable_net": false,
        "id": 218288,
        "name": "django-picklefield",
        "old_status": null,
        "source_json": "{\"tito_test\": false, \"git_dir\": \"\", \"git_url\": \"https://github.com/intel-hpdd/manager-for-lustre-dependencies.git\", \"git_branch\": \"master\"}",
        "source_type": "git_and_tito",
        "webhook_rebuild": false
}

Notice how many of that package's properties were changed by an edit of just one field!
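The unintended changes can be checked mechanically by diffing the two source_json payloads quoted above (a small sketch; it compares only the keys present in this example):

```python
import json

# The two source_json strings from the before/after configurations above.
before = json.loads('{"tito_test": false, "git_dir": "django-picklefield", '
                    '"git_url": "https://github.com/intel-hpdd/'
                    'manager-for-lustre-dependencies.git", "git_branch": ""}')
after = json.loads('{"tito_test": false, "git_dir": "", '
                   '"git_url": "https://github.com/intel-hpdd/'
                   'manager-for-lustre-dependencies.git", "git_branch": "master"}')

changed = sorted(key for key in before if before[key] != after[key])
print(changed)  # → ['git_branch', 'git_dir'] -- only git_branch was edited on purpose
```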


clime commented at 2017-06-05 09:33:08:

The title of this issue says: "copr edit-package-tito nulls out fields not edited" but I cannot see anything like that in the provided example.

What I can see though is that the webhook_rebuild property was changed (from True to False) when it should not have been. I'll look into it. Thank you!

As for the required/non-required property, well, it comes from how the code works. It does not actually consider what has already been set and what has not. I mean, it would probably be possible, but maybe a little bit complicated.


clime commented at 2017-06-05 13:25:21:

The webhook_rebuild reset is now fixed in https://pagure.io/copr/copr/c/a547d69e514249d7b17a4f5e192042613235e556?branch=master.

However, this problem remains e.g. for the git-dir switch and in general for all switches that specify the source_json field of a package (you will see it if you do copr-cli get-package somecopr --name somepackage). You need to specify them all when editing a package; their current values are not taken into account. I am not sure if these properties are also the subject of this issue, as the example shows a change only in the webhook_rebuild property.
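The read-modify-write behaviour the reporter expects could be sketched like this (a hypothetical helper, not the actual copr code; the function name and keyword handling are illustration only):

```python
import json

def merged_source_json(current: str, **overrides) -> dict:
    """Merge user-supplied switches into the stored source_json so that
    fields the user did not pass keep their current values (sketch)."""
    data = json.loads(current)
    data.update({key: value for key, value in overrides.items()
                 if value is not None})
    return data

current = ('{"tito_test": false, "git_dir": "django-picklefield", '
           '"git_branch": ""}')
result = merged_source_json(current, git_branch="master")
print(result["git_dir"])     # → django-picklefield (preserved, not reset)
print(result["git_branch"])  # → master
```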


clime commented at 2017-06-09 09:07:14:

Feel free to reopen or file a new issue against the interface regarding the 'source_json' attributes.

Builds triggered by GitHub WebHook (tag event) do not enable Internet during build

Original issue: https://pagure.io/copr/copr/issue/55
Opened: 2017-04-05 23:42:47
Opened by: bostrt

I have "Enable internet access during builds" enabled on my project, but when a build is triggered by a WebHook tag event on my GitHub project, internet is not used. I have to resubmit the build with Internet enabled.


clime commented at 2017-05-04 17:54:27:

Should be fixed by commit ff1117b (needs testing). Thank you for the report!

Building SRPMs on builder

Original issue: https://pagure.io/copr/copr/issue/68
Opened: 2017-05-19 13:14:12
Opened by: praiskup

Seeing https://pagure.io/copr/copr/c/88806245814ef3dc59a7510faaa6c078ea52a792?branch=master, I must give +1 to this idea. The builder really is the natural place to build srpms.

Though note that "pushing" the changes back to dist-git is totally insecure. It is pretty trivial to escape from the chroot (by installing a hacked RPM into the minimal buildroot) and gain root access on the builder -- so granting "push access" to dist-git (be that via client cert or whatever) would mean that any hacked builder might completely destroy the dist-git machine.

The correct way would be to (1) build SRPM on builder, and (2) import that srpm into dist-git on dist-git.


praiskup commented at 2017-05-19 15:24:43:

The correct way would be to (1) build SRPM on builder, and (2) import that srpm into dist-git on dist-git.

Note that this becomes pretty trivial anyway with the copr-builder package (we can hack on copr-builder or add an additional script for the Tito, mock-scm, ... workflows). Though if we want to reuse the actual builders and not have a separate set of VMs -- we need to implement the "build srpm" requests on the backend, probably ....


clime commented at 2017-05-19 15:58:15:

I don't know what you mean by "completely destroy dist-git machine". You could start uploading like crazy, right; that would bring dist-git down. There is an alternative to push the built srpm from the backend to dist-git after it has been downloaded from a builder, but we certainly don't want to do the same thing (building srpms) in two places. copr-dist-git should serve as a "build history" so that we are able to reproduce a build with the sources as they were when the build was submitted. It might not be an infinite history, but still at least some history.


clime commented at 2017-05-24 10:33:17:

Note that I updated the change proposal (https://pagure.io/copr/copr/c/f5ab70925b12391cadd83ca8d0c36d4e5e6c3ae7) also to address the mentioned problem.


praiskup commented at 2017-05-24 21:54:01:

LGTM, thanks!


praiskup commented at 2017-05-30 06:18:59:

Reading the brainstorming again:

* open copr-dist-git for user-interaction
* do not store built srpm there anymore, instead build them on builders based on db data (git/svn hashes, gem/pip package versions)
* potentially use copr-cli for building on builders by employing (--local --detached) switches **

Can we reconsider the second point? It is actually super cool to have the "state" of the actually built RPMs baked into dist-git. It will look very weird if some RPM is in the repository and nothing is in dist-git.

The dist-git machine as is has become one of the best things in copr -- we simply upload the SRPM and do the build; and because the srpm is completely committed to dist-git (and the lookaside), everyone can fully review what changed in the RPMs (cgit allows showing the git diff directly) and, most importantly, we can always rebuild from dist-git only --- so as long as the SRPM is committed, we are completely independent of the outside world.

From brainstorming.rst:

** actually scratch the last point. What we need is a builder script that gets just the task_id as input and downloads the build definition
from the frontend and, based on that, executes the whole build. The current copr-builder fixes some parts of the build definition (most importantly
the dist-git source args) on its command line, which makes future changes like in point 2 basically a no-go. We need to keep our options
open in this regard.

For this reason, not storing the SRPM in dist-git is actually a no-go from my POV; clear downvote.
Building against a build-id is not a bad idea, but ensure the downloaded contents of github or tito, or mock-scm, pypi .... are always fully committed to dist-git ....


praiskup commented at 2017-05-30 06:24:22:

Note that I told you that moving the srpm build to the builder should be done (like ~3 years ago), so I agree from a long-term POV.... But with a trivial patch as in https://pagure.io/copr/copr/pull-request/70 it is mostly a micro-optimization (PR 70 makes the huge performance gain in this regard). There's no point in doing such quick architecture changes headlong overnight ... (edit: I'd be glad to discuss such changes)

And if moving the SRPM build to the copr-builder side means "no commits in dist-git", I am totally against it.


clime commented at 2017-05-30 13:32:25:

Reading the brainstorming again:

  • open copr-dist-git for user-interaction
  • do not store built srpm there anymore, instead build them on builders based on db data (git/svn hashes, gem/pip package versions)
  • potentially use copr-cli for building on builders by employing (--local --detached) switches **

Can we reconsider the second point? It is actually super cool to have the "state" of the actually built RPMs baked into dist-git. It will look very weird if some RPM is in the repository and nothing is in dist-git.
The dist-git machine as is has become one of the best things in copr -- we simply upload the SRPM and do the build; and because the srpm is completely committed to dist-git (and the lookaside), everyone can fully review what changed in the RPMs (cgit allows showing the git diff directly) and, most importantly, we can always rebuild from dist-git only --- so as long as the SRPM is committed, we are completely independent of the outside world.
From brainstorming.rst:
** actually scratch the last point. What we need is a builder script that gets just the task_id as input and downloads the build definition
from the frontend and, based on that, executes the whole build. The current copr-builder fixes some parts of the build definition (most importantly
the dist-git source args) on its command line, which makes future changes like in point 2 basically a no-go. We need to keep our options
open in this regard.

For this reason, not storing the SRPM in dist-git is actually a no-go from my POV; clear downvote.
Building against a build-id is not a bad idea, but ensure the downloaded contents of github or tito, or mock-scm, pypi .... are always fully committed to dist-git ....

Note that if the repository is maintained elsewhere (Github, Gitlab, Pagure) and it has a reliable history, we don't need to duplicate the data from there for whatever fabricated purpose.

The only reason to duplicate the data (i.e. transform it into the typical tarball+spec pair) would be if someone wants to add patches on top of the upstream repo. That use-case will be supported.

We depend on these services by definition (for the GIT source), and maintaining our own history that is a copy of the original history is just a huge waste of resources. For each build, we make a new tarball and store it in distgit; this will just continue to eat more and more space and become an even bigger maintenance nightmare. Using the incremental history available in the original repository by default simply makes sense.


clime commented at 2017-05-30 14:04:17:

Note that I told you that moving the srpm build to the builder should be done (like ~3 years ago), so I agree from a long-term POV.... But with a trivial patch as in https://pagure.io/copr/copr/pull-request/70 it is mostly a micro-optimization (PR 70 makes the huge performance gain in this regard). There's no point in doing such quick architecture changes headlong overnight ... (edit: I'd be glad to discuss such changes)
And if moving the SRPM build to the copr-builder side means "no commits in dist-git", I am totally against it.

We have known each other for 1.5 years, so that's about how long I have known about that idea. PR#70 nicely targets one problem we have (had), and that is importing sources many times when once was enough. Point 2 in the proposal says that we don't need to import certain kinds of sources at all unless the user explicitly asks for it (and that's because he probably wants to make downstream changes upon them and build test packages). The benefit is that we don't need to maintain a huge amount of data that nobody really cares about.

You requested Bug 1427431 - [RFE] dist-git: policy for garbage collecting of lookaside cache tarballs at https://bugzilla.redhat.com/show_bug.cgi?id=1427431. If the dist-git history should be reliable and serve as a source for rebuilding, we cannot very well do this. If we did, the rebuild sources would be lost in case the garbage collection took them away, and they would need to be re-imported into dist-git manually, probably into a new temporary repo, before the user could do his or her rebuild.

Also note that we can't even garbage-collect the data once dist-git becomes open for user interaction.


praiskup commented at 2017-06-01 09:29:07:

Sorry for the delay

Note that if the repository is maintained elsewhere (Github, Gitlab, Pagure) and it has a reliable history, we don't need to duplicate data from there from whatever fabricated purpose.

Not at all, unless you plan to duplicate the original git repository in copr's dist-git too. Depending on remote git storage would make copr a much weaker architecture.

The other reason is the DoS/DDoS aspect I noted in PR 70. If we rely on a remote git repo or anything else, we'll forever DoS upstreams (for each build and chroot we'll clone the remote repository, which is terribly unfriendly). It is not a problem now, but once some upstream decides to blacklist copr, it will be too late.

IMHO, this makes the movement a no-go. Just please keep the actual architecture; my honest opinion.


praiskup commented at 2017-06-01 09:31:13:

Also note that the proposed architecture keeps the "race condition" discussed in PR 70 (each chroot clones the remote repo at a different time, which means that completely different code might be built).


clime commented at 2017-06-01 09:54:55:

Sorry for the delay

Note that if the repository is maintained elsewhere (Github, Gitlab, Pagure) and it has a reliable history, we don't need to duplicate data from there from whatever fabricated purpose.

Not at all, unless you plan to duplicate the original git repository in copr's dist-git too. Depending on remote git storage would make copr a much weaker architecture.

I don't understand a thing you say here. We depend on remote git storage by definition, because we retrieve new sources from there, and COPR is mainly a system for CI - that is, for building new sources.

The other reason is the DoS/DDoS aspect I noted in PR 70. If we rely on a remote git repo or anything else, we'll forever DoS upstreams (for each build and chroot we'll clone the remote repository, which is terribly unfriendly). It is not a problem now, but once some upstream decides to blacklist copr, it will be too late.

I mean, the remote side should be able to handle 10 consecutive downloads. These systems are used by many users, and they need to be able to handle this. Anyway, this can easily be optimized by proxying the download.

IMHO, this makes the movement no-go. Just please keep the actual architecture, my honest opinion.

That's nice. But I remember you said something about opening dist-git several times. It would be nice if you were happy about a good change once in a while.


clime commented at 2017-06-01 10:00:47:

Also note that the proposed architecture keeps the "race condition" discussed in PR 70 (each chroot clones the remote repo in different time, which means that completely different code might be built).

There is no race condition there. It might happen that different code is built on different builders, but that is because a git commit hash was not used for the build specification (that is currently not possible in COPR, so it's not a user mistake); instead, HEAD is taken, which is a moving target. Again, this can be solved by proxying the download through one node or, more simply, by just asking the remote side what HEAD is before continuing.


praiskup commented at 2017-06-01 10:11:41:

This is a difficult and time-consuming discussion; let's have an in-person chat in the office tomorrow? Maybe we can have a chat with mirek too, so more heads are in the room..

I don't understand a thing you say here. We depend on remote git storage by definition because we retrieve new sources from there and COPR is mainly system for CI - that is for building new sources.

For the initial build, yes. But upstream repos are born and die all the time ... once the package has been built, we need to have the full source (at least for the last built package). We cannot depend on remote repos...

I mean, the remote side should be able handle to 10 consequent downloads. These systems are used by many users and they need to be able to handle this. Anyway this can be easily optimized by proxying the download.

I think we should do it the right way -> there's one request, which should download the source once -> and also deterministically build the same output for all chroots. Doing it the "wrong way" and working around it with a proxy is IMO too expensive an approach.

That's nice. But I remember you said something about opening dist-git several times. It would be nice if you were happy for a good change once in a time.

Correct, I'm all for opening dist-git for direct writing by users. That's IMO a very distant target now, but still .. it is something we agree on. But what I'm saying here in #68 doesn't go against this.


clime commented at 2017-06-01 10:21:41:

This is difficult and time consuming discussion, let's have a in person chat in the office tomorrow? Maybe we can have a chat with mirek too, so more heads are in room..

There is no reason not to discuss it here. You hide information from other people by chatting off-site.

For the initial build. But upstream repos are born and die all the time ... once the package has been built, we need to have full source (at least for the last built package). We can not depend on remote repos...

Again we depend on them already.

If some remote repo dies, probably the user won't be interested in building the sources again? I mean, that's the point. For these sources, usually the people that are building (or setting up webhooks etc.) are the same people that develop the software. You are obviously missing this point.

I think we should do it the right way -> there's one request, which should download to source once -> and also deterministically built the same output for all chroots. Doing it "wrong way" and work-around it by proxy is IMO too expensive approach.

There is no right answer to everything.

Correct, I'm all for opening dist-git for direct writing by users. That's IMO very far target now, but still .. it is something we agree on. But what I'm saying here in #68 doesn't go against this.

Well, it kind of goes against it. We don't really want to auto-import something into an open user's repository that he or she maintains with care.


praiskup commented at 2017-06-01 10:30:50:

Also, this is closely related to #60. There were very good points raised by our users: building an RPM from "upstream" repos is a two-phase process: (a) get the source from upstream, and (b) build the package from the source. It is worth having (a) done once, deterministically, and having it backed up in copr dist-git.


praiskup commented at 2017-06-01 10:41:15:

Again we depend on them already.

Again -- only initially (at the time we know that the remote source exists).

If some remote repo dies, probably the user won't be interested in building
the sources again?

Why do you think so?

I mean that's the point. For these sources, usually the people that are
building (or setting up webhooks etc.) are the same people that develop the
software. You are obviously missing this point.

Not really, maintaining a packaging "CI" doesn't mean I have to be the upstream
maintainer. That's the point -- Fedora maintainers should be motivated to set up
a copr-ci-packaging workflow even though they can't commit to upstream.

There is no right answer to everything.

Having it in dist-git as it is not is clear answer. Let me reverse the discussion -- what's the motivation to change the architecture?

Correct, I'm all for opening dist-git for direct writing by users. That's IMO
very far target now, but still .. it is something we agree on. But what I'm
saying here in #68 doesn't go against this.

Well, it kind of goes against it. We don't really want to auto-import
something into open user's repository that he or her maintains with care.

Why? That's exactly the point ... if you allow users to commit something into
dist-git directly, then you want to have the other methods (srpm upload or tito
build) also reflected ... otherwise dist-git tracks just a small part of the
package story...



praiskup commented at 2017-06-01 10:48:17:

There is not reason not discuss it here. You close information for other people by chatting off-side

Pagure strikes again ... and obviously nobody cares enough ATM to discuss it here and now; there's no reason to waste our time here when it is much more convenient to have an in-person meeting.
I'll happily dump the meeting consensus here.

I see the proposal here as a clear step backwards (on several fronts..) with no additional value. So perhaps in person you'll have more chances to convince me and "defeat" this architecture change.


clime commented at 2017-06-01 10:48:46:

Also this is closely related to #60. There were very good points raised by our users: Building of RPM from "upstream" repos is two phase process (a) get the source from upstream, and (b) build the package from source. It is worth having the (a) done once, deterministically and have it backed up in copr dist-git.

(a) done once, deterministically

Yes, this can be done.

have it backed up in copr dist-git.

We are not a back-up service for Github. See https://en.wikipedia.org/wiki/Separation_of_concerns.


praiskup commented at 2017-06-01 10:56:46:

So far points to be discussed in personal meeting

  • what's the motivation for changing the actual architecture
  • how do we solve the race condition expected to be solved once and for all by PR #70 in the new architecture (this needs to be done for all build options, also the future options discussed in #60)
  • how do we solve the DDoS issue (see #70) in the new architecture
  • manual and automatic changes in RPMs (done by direct dist-git push vs. srpm upload, e.g.) need to be tracked somewhere ... where?
  • some users have SRPMs of around 1.5 GB; can we ensure that uploading the SRPM and extracting it is done just once in the new architecture?
  • based on the discussion, we try to solve the storage issue on the dist-git side (a possible alternative is rhbz#1427431).
  • why do we need a perl rewrite of copr-builder?

Please let me know if there are other topics to be discussed.


clime commented at 2017-06-01 11:02:20:

Again we depend on them already.

Again -- only initially (at the time we know that the remote source exists).

Yes, and the "initially" is what the users actually care about.

If some remote repo dies, probably the user won't be interested in building
the sources again?

Why do you think so?

Because the people building are devs at the same time.

I mean that's the point. For these sources, usually the people that are
building (or setting up webhooks etc.) are the same people that develop the
software. You are obviously missing this point.

Not really, maintaining packaging "CI" doesn't mean I have to be upstream
maintainer. That's the point -- fedora maintainers should be motivated to setup
copr-ci-packaging-workflow even though they can't commit to upstream.

Yes, they can when copr-dist-git is open (or there is also the fork-repo feature in the upstream, not sure if you have heard of it).

There is no right answer to everything.

Having it in dist-git as it is not is clear answer.

Not getting this sentence.

Let me reverse the discussion -- what's the motivation to change the architecture?

E.g. 1.2T of data on the copr-dist-git machine, most of which is just garbage and which is being backed up by infrastructure rsync scripts, which brings the whole machine down at quite short intervals? And that continues to grow at least 1-2GB per day at the current rate. Not only does it bring copr-dist-git down, it also impacts other parts of the infrastructure, which are backed up from the same source. Do you know how long it takes to restore selinux contexts on such a huge amount of data?

Correct, I'm all for opening dist-git for direct writing by users. That's IMO
very far target now, but still .. it is something we agree on. But what I'm
saying here in #68 doesn't go against this.

Well, it kind of goes against it. We don't really want to auto-import
something into an open user's repository that he or she maintains with care.

Why? That's exactly the point ... if you allow users to commit something into
dist-git directly, then you want to have the other methods (srpm upload or tito
build) also reflected ... otherwise dist-git tracks just a small part of the
package story...

"other methods" means that the repository will be different, and if not, you don't want to mess with the user's repo unless the user explicitly asks for it. You can read about this principle here: https://en.wikipedia.org/wiki/Principle_of_least_astonishment


clime commented at 2017-06-01 11:09:25:

There is no reason not to discuss it here. You hide information from other people by chatting off-site.

Pagure strikes again ... and obviously nobody cares enough ATM to discuss it here and now, there's no reason to waste our time here when it is much more convenient to have in person meeting.
I'll happily dump the meeting consensus here.

That is not an 'open' approach. Potential readers will not get the required context.

I see that proposal here is clear step backwards (on several fronts..) with no additional value. So perhaps personally you'll have more chances to convince me and "defeat" this architecture change.

The value is that COPR maintainers will not need to spend nights fixing a machine that is broken because it is being flooded with tons of data that no one really needs for anything.


clime commented at 2017-06-01 11:15:42:

So far points to be discussed in personal meeting

what's the motivation for changing the actual architecture
how do we solve the race condition expected to be solved once and for all by PR #70 in the new architecture (this needs to be done for all build options, also the future options discussed in #60)
how do we solve the DDoS issue (see #70) in the new architecture
manual and automatic changes in RPMs (done by direct dist-git push vs. srpm upload, e.g.) need to be tracked somewhere ... where?
some users have SRPMs of around 1.5 GB; can we ensure that uploading the SRPM and extracting it is done just once in the new architecture?

Please let me know if there are other topics to be discussed.

I think we have already discussed most of these things. But if you want some recap, I am ok with it.


praiskup commented at 2017-06-01 11:20:40:

Yes, the "initially" the users actually care about.

Please provide stats before making such statements... I'm a copr user too, and the 'cgit' provided by copr is one of the most valuable features for me ... I always check what changed between two consecutive SRPM uploads..

Because the people building are devs at the same time.

Again, statistics? How many Fedora packages are maintained by actual upstream developers?

@clime: There is no right answer to everything.
@praiskup: Having it in dist-git as it is not is clear answer.
@clime: Not getting this sentence.

Sorry :(, I wanted to say: Having everything as it is now is the clear and ultimate answer for all the problems you mentioned so far....

E.g. 1.2 TB of data on the copr-dist-git machine, most of which is just garbage ...

... OK, I got this. But that's what bug 1427431 is about -> let users define (opt-in) what is and
what is not important for them. You can also have some mandatory "cleanup"
garbage collector for pathologically demanding users ... Just please don't
hurt everybody (your proposed changes will unconditionally hurt my work-flows,
for example).

Potential readers will not get the required context.

I can/you can happily answer them later; also we can have an "open" BlueJeans
chat if there are some other concerned parties... if there is somebody, please
raise your voice now!

Also, you "defeat" open discussion - but even to me it is
completely unclear what motivates you to change a correct architecture into
something which is wrong in its basics. Also, you push unreviewed architecture changes all
the time..

I think we have already discussed most of these things. But if you want some recap, I am ok with it.

None of the points here are answered for me, to be honest, so thank you for discussing this first before any complete movement.


clime commented at 2017-06-01 11:40:54:

Yes, the "initially" the users actually care about.

Please provide stats before making such statements... I'm a copr user too, and the 'cgit' provided by copr is one of the most valuable features for me ... I always check what changed between two consecutive SRPM uploads..

I am talking here about people who build directly from GIT. I won't be collecting stats for you. Find me a guy who builds from GIT and who is not a dev of that project, or for whom it is easier to use copr-dist-git to make changes to the sources instead of forking the project on GitHub, GitLab, or Pagure.

Because the people building are devs at the same time.

Again, statistics? How many Fedora packages are maintained by actual upstream developers?

Fedora packagers use Fedora DistGit, which is open, and we will provide the same option. What we are talking about here is not a packager use-case, though. It's a dev's use-case: someone who wants to build his or her software for testing, or wants to provide a stable release to users.

@clime: There is no right answer to everything.
@praiskup: Having it in dist-git as it is not is clear answer.
@clime: Not getting this sentence.

Sorry :(, I wanted to say: Having everything as it is now is the clear and ultimate answer for all the problems you mentioned so far....

No, it is not. See the problems in the post that you comment on.

E.g. 1.2 TB of data on the copr-dist-git machine, most of which is just garbage ...

... OK, I got this. But that's what bug 1427431 is about -> let users define (opt-in) what is and
what is not important for them. You can also have some mandatory "cleanup"
garbage collector for pathologically demanding users ... Just please don't
hurt everybody (your proposed changes will unconditionally hurt my work-flows,
for example).

Please, code this so that it works reliably and I will give you my hat...maybe.

Potential readers will not get the required context.

I can/you can happily answer them later; also we can have an "open" BlueJeans
chat if there are some other concerned parties... if there is somebody, please
raise your voice now!

People can stumble upon this later in time.

Also, you "defeat" open discussion - but even to me it is
completely unclear what motivates you to change a correct architecture into
something which is wrong in its basics. Also, you push unreviewed architecture changes all
the time..

Well, I pushed some changes earlier that I didn't properly discuss when it was probably needed; I admit that. But now I am discussing things and talking about them. Note that what I have done so far regarding this issue is pretty much equivalent from an "arch" point of view. It just offers more options.

It's correct in its basics and I can argue it out anytime with anyone.

I think we have already discussed most of these things. But if you want some recap, I am ok with it.

None of the points here are answered for me, to be honest, so thank you for discussing this first before any complete movement.

Then it means you are ignoring the points I am giving you. There is no such thing as a sudden complete movement here. It will require a step-by-step approach over time.


praiskup commented at 2017-06-01 12:54:32:

I'm not ignoring you, at least not intentionally ... I deal with large SRPMs and repeated clones from upstream all the time in Internal Copr, and I'm really concerned about this topic; motivated enough to work on this in #70 and even more...

So yeah, the only problem we are trying to deal with is the storage issue on dist-git, right?

Please, code this so that it works reliably and I will give you my hat...maybe.

'giving my hat' == 'agree with you'? I can't find an online definition for this term ...
but I agree that I can hack on the frontend <-> dist-git protocol (e.g. a garbage collector; there's a lot of things we can clean up without losing important parts, indeed) ... But I cannot hack on an uncertain PoC which won't be accepted in the end, so I need discussion in advance. And "reliably" is a bit strong a requirement :), nothing is really implemented reliably till we test it and fix the major bugs over several iterations ... :)


clime commented at 2017-09-08 12:11:01:

SRPMs are now being built on builders. Simple importing logic was kept on copr-dist-git to avoid pushes that would need to be authenticated.

Docs: the "HowTo enable copr" instructions fail

Original issue: https://pagure.io/copr/copr/issue/42
Opened: 2017-03-05 11:07:42
Opened by: taw

https://docs.pagure.org/copr.copr/how_to_enable_repo.html#how-to-enable-repo

This part here is incorrect because no yum-plugin-copr rpm exists. I have manually created a repo file for EL7 boxes, but... it would be nice to have a copr enablement mechanism that worked; in the meantime it would probably be even better if we had a documented process that didn't fail. :)

or if you have older distribution:

# yum copr enable user/project
you need to have yum-plugin-copr installed

taw commented at 2017-03-05 12:28:25:

Related: https://bugzilla.redhat.com/show_bug.cgi?id=1319685


clime commented at 2017-03-07 10:38:27:

Hello, thank you for this report. Please, see https://bugzilla.redhat.com/show_bug.cgi?id=1429831.


msuchy commented at 2017-05-18 09:38:02:

Copr plugin will be available in next minor release of RHEL in yum-utils.

[RFE] Show number of failed/succeeded builds in the status of a build

Original issue: https://pagure.io/copr/copr/issue/53
Opened: 2017-03-30 11:36:27
Opened by: pvomacka

When I'm building a package for multiple architectures and one of them fails, the overall status of the build (in the table on the "Builds" tab) is shown as "Failed". The overview page also says that the build failed.
That is not really user-friendly for anyone who wants to see the real status of the build. The failed architecture is usually one which is not so important, but because the build is marked as "Failed", a lot of users may overlook it even if the build did not fail on the architecture the user is interested in.

Proposed solution:
Show the number of builds (architectures) which failed and the number of all builds (architectures).
E.g. I'm building package 'xyz' on x86_64, ppc64le and i386, and the build fails on ppc64le. Then instead of just a Failed status, there could be something like '2 of 3 build(s) Succeeded' or '1 of 3 build(s) Failed'.
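The proposed aggregation is easy to sketch. Below is a hypothetical helper (not part of the Copr codebase); the state names mirror the ones mentioned later in this thread, and the rule for in-progress builds is an assumption:

```python
# Sketch: summarize per-chroot build states into one human-readable status.
# Hypothetical illustration of the RFE, not actual Copr code.

TERMINAL_STATES = {"succeeded", "failed", "canceled"}

def summarize(chroot_states):
    terminal = [s for s in chroot_states if s in TERMINAL_STATES]
    if len(terminal) < len(chroot_states):
        # at least one chroot is still pending/running/importing/starting
        return "in progress"
    ok = sum(1 for s in terminal if s == "succeeded")
    return f"{ok} of {len(terminal)} build(s) succeeded"
```

For the example above (x86_64 and i386 succeed, ppc64le fails), `summarize(["succeeded", "failed", "succeeded"])` returns `"2 of 3 build(s) succeeded"`.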


clime commented at 2017-04-04 17:57:57:

Well, this was discussed several times before. See the links at the bottom.

The problem is that we also have other statuses than just "failed" and "succeeded".

We also have "running", "importing", "starting", "pending", "canceled", etc.

And note that the "canceled" state is also terminal, so it should probably be included in the final counts.

The current solution is simple and that's good even though I understand it can be a bit tough to get around it.


pvomacka commented at 2017-04-05 18:16:17:

Thank you for comment and links to BZs.

I think the change could apply only to terminal states (the in-progress statuses could have priorities and be shown according to that priority - only one status for multiple chroots even if the chroots have different statuses). I did not realize that there is a third terminal state ('Cancelled'). Anyway, is it even possible to cancel building on only one chroot in one build job? If it is not, then would it be possible to show "Cancelled" every time the build is cancelled? Only when a build is not cancelled would the number of failed/succeeded builds be shown.

That could be helpful mainly for people who want to use the copr repository. I know that the current solution is simple, but it is really confusing and not user-friendly, because in several situations it seems that the build failed and there is no functional build, even if there are more functional builds than failed ones.

But as I see in BZ, unfortunately all similar proposals were closed as wontfix. Is there any chance that this will be implemented sometime in the future?


clime commented at 2017-04-05 20:41:44:

Thank you for comment and links to BZs.
I think the change could apply only to terminal states (the in-progress statuses could have priorities and be shown according to that priority - only one status for multiple chroots even if the chroots have different statuses). I did not realize that there is a third terminal state ('Cancelled'). Anyway, is it even possible to cancel building on only one chroot in one build job? If it is not, then would it be possible to show "Cancelled" every time the build is cancelled? Only when a build is not cancelled would the number of failed/succeeded builds be shown.

Currently, the behavior is that if a build is canceled, then every chroot is marked as canceled, even if it was successful. This is probably not what we want, however. Only chroots that were still in progress when the build was canceled should be marked as such. Nobody has complained about it yet, but we might fix that at some point anyway.

That could be helpful mainly for people who want to use the copr repository. I know that the current solution is simple, but it is really confusing and not user-friendly, because in several situations it seems that the build failed and there is no functional build, even if there are more functional builds than failed ones.

It's true it can be a bit confusing but it's something we decided to go with for simplicity.

But as I see in BZ, unfortunately all similar proposals were closed as wontfix. Is there any chance that this will be implemented sometime in the future?

I cannot say it will never be implemented but currently I would still vote to keep the current behavior.


clime commented at 2018-01-06 15:22:48:

This is likely not to happen currently. We might do it in the future at some point if we have a clear idea how to do it relatively simply.

copr edit-package-tito on pypi package redefines it

Original issue: https://pagure.io/copr/copr/issue/66
Opened: 2017-05-19 02:24:39
Opened by: brianjmurrell

If I run copr edit-package-tito --name foo --git-url ... --git-branch on a package that was created with copr add-package-pypi it will redefine the package from pypi to tito.

I think this violates the principle of least surprise. My expectation is that that would produce an error that you cannot use edit-package-tito on a pypi-source package.


clime commented at 2017-06-05 13:32:56:

It's true that it might be unwanted behavior. It works like that, however, to make package type changes possible (e.g. from tito to scm or vice versa).


praiskup commented at 2020-11-18 08:02:49:

This is, though, the only way to re-define the package from e.g. -pypi to e.g. -scm. As discussed recently, for example.

I'm sure the current UI is not 100% ideal, though changing this would require us to add a new
CLI, new Web UI forms, and a new Python API interface, and the work on that would be expensive. While
I appreciate any RFE, we won't have time to work on this issue, so I'm closing
it. Certainly though, pull requests on this topic would be accepted.

PyPI uploads appear to be quite out of date

Original issue: https://pagure.io/copr/copr/issue/73
Opened: 2017-06-01 09:46:46
Opened by: ncoghlan

Both https://pypi.python.org/pypi/copr and https://pypi.python.org/pypi/copr-cli were last updated in August 2016, and refer to fedorahosted for the COPR project URL.

The release records for the Pagure repository show 1.77 being tagged significantly more recently than that, and the current setup.py has the expected Pagure URL listed.


msuchy commented at 2017-08-04 10:11:02:

Updated.

[RFE] Please support webhooks integration with GitLab

Original issue: https://pagure.io/copr/copr/issue/51
Opened: 2017-03-20 11:34:25
Opened by: thozza

COPR currently supports webhooks integration with pagure.io and github.com, but it lacks support for gitlab.com (or any instance of GitLab).

We have a project Bughunting (https://gitlab.com/bughunting/bughunting) hosted on gitlab.com and are using Tito. We would like to automatically build RPMs in COPR (http://copr-fe.cloud.fedoraproject.org/coprs/thozza/bughunting/), but currently this can not be configured.

The GitLab documentation regarding webhooks is available here: https://gitlab.com/help/user/project/integrations/webhooks


clime commented at 2017-05-24 17:10:29:

Implemented by https://pagure.io/copr/copr/c/c337932ca848aaa09ac2fd9b3b08aa57a482e274. It will be in production tomorrow.

[RFE] copr-cli should be able to show repo link based on copr name and chroot

Original issue: https://pagure.io/copr/copr/issue/49
Opened: 2017-03-20 09:19:29
Opened by: hhorak

I haven't found an easy way to get the repo link using the copr CLI when I have just the copr name and the chroot name. This would be very handy for automation, since we have tooling that gets a copr name and we cannot hard-code the repo links. This is how it could work:

$> copr get-repo <copr> <chroot>

For example:

$> copr get-repo hhorak/mariadb-wrapper epel-7-x86_64
https://copr.fedorainfracloud.org/coprs/hhorak/mariadb-wrapper/repo/epel-7/hhorak-mariadb-wrapper-epel-7.repo

Or directly print output:

$> copr get-repo hhorak/mariadb-wrapper epel-7-x86_64
[hhorak-mariadb-wrapper]
name=Copr repo for mariadb-wrapper owned by hhorak
baseurl=https://copr-be.cloud.fedoraproject.org/results/hhorak/mariadb-wrapper/epel-7-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/hhorak/mariadb-wrapper/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1
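The repo-file URL can be derived from the owner/project pair and the chroot, as the example above suggests. A minimal sketch (the URL pattern is inferred from that example; group-owned projects use a different URL scheme and are not covered here):

```python
# Sketch: derive the .repo file URL from "owner/project" and a chroot name.
# Pattern inferred from the example in this issue; plain user projects only,
# since group projects have a slightly different URL.

def repo_url(copr, chroot):
    owner, project = copr.split("/")
    chroot_base = chroot.rsplit("-", 1)[0]   # drop the arch: epel-7-x86_64 -> epel-7
    return (f"https://copr.fedorainfracloud.org/coprs/{owner}/{project}"
            f"/repo/{chroot_base}/{owner}-{project}-{chroot_base}.repo")
```

For instance, `repo_url("hhorak/mariadb-wrapper", "epel-7-x86_64")` yields the URL from the example above.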

praiskup commented at 2017-03-20 10:50:21:

Yeah, there's dnf -y copr enable hhorak/mariadb-wrapper, but I can confirm that's not exactly the same thing (apart from the fact that there are still problems with yum-utils && copr)


hhorak commented at 2017-03-20 16:52:24:

I saw there is some guessing implemented in the copr plugin, so maybe moving this guessing into copr-cli would make sense.. (expecting that the dnf copr plugin requires copr-cli; if not, then into some shared library..?)


praiskup commented at 2017-03-21 07:37:38:

Requiring copr-cli or python-copr by python-dnf-plugins-core is not an option IMO. Run-time requirements would affect all installations. Maybe there's some way to have reverse dependency .. and share the code for chroot guessing.

Also, if your request was to get the link only -- what about adding a --repo-link-only option to dnf-plugins-core?

I see two major issues ATM in this topic: (a) there's no yum-based alternative for 'dnf copr enable', and (b) other copr instances don't have such an automatic dnf copr enable feature. So to me, those issues smell like we want to have a copr-cli enable (too?).

This is off-topic, but it would be more than nice to have some dependency mechanism implemented among repositories. Some coprs depend on other coprs, and the user needs to enable all of them manually ... so maybe this is also a hint to do the "enable" job in copr-cli.


hhorak commented at 2017-03-21 18:43:22:

Since at least the repo content can be shown quite easily, because the URL is in an expected format, I can do something like this:

curl https://copr.fedorainfracloud.org/coprs/${copr}/repo/${chroot_without_arch}/ > /etc/yum.repos.d/${copr_without_slash}.repo

which I hadn't realized before, and it should be enough for my use case.

Anyway, the ideas around copr-cli enable look good to me..


msuchy commented at 2017-03-22 10:54:35:

Group repos have slightly different URL.

Pagination of packages/builds loads everything, and is slow

Original issue: https://pagure.io/copr/copr/issue/54
Opened: 2017-04-04 17:03:37
Opened by: tvon

When there are a large number of packages there are some usability issues with pagination. E.g.:

https://copr.fedorainfracloud.org/coprs/g/rubygems/rubygems/packages/

So far as I can tell there is pagination, but it is entirely client-side (!?). The entire table must be loaded before it is paginated. In my case this will crash Firefox on OS X and kill Chrome for a bit before the pagination kicks in.
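Server-side pagination would avoid shipping the whole table to the browser; only one page of rows is queried and rendered. The offset arithmetic is trivial (a generic sketch, not taken from the Copr codebase):

```python
# Sketch: server-side pagination -- compute the slice for a single page
# instead of rendering every row. Generic illustration, not Copr code.

def page_slice(page, per_page=50):
    """Return (offset, limit) for a 1-indexed page number."""
    if page < 1:
        raise ValueError("pages are 1-indexed")
    return (page - 1) * per_page, per_page

# A database query would then apply the slice, e.g.:
#   SELECT ... FROM package ORDER BY name OFFSET :offset LIMIT :limit
```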


praiskup commented at 2019-10-12 21:47:37:

The pagination has been enhanced recently, I believe the issues are over. Please reopen if you still experience issues.


praiskup commented at 2019-10-12 21:48:37:

Ok, no. Such large pages are still slow.


praiskup commented at 2021-08-19 11:44:35:

Another report: https://bugzilla.redhat.com/show_bug.cgi?id=1995121


frostyx commented at 2021-08-20 14:46:14:

@praiskup do we know whether the page rendering chokes or whether the database query takes ages?


praiskup commented at 2021-08-20 15:11:54:

This used to be a problem with rendering; the query was pretty optimized if I remember correctly (but I did not check the logs now).

FTR, I tested copr list-builds yesterday, and it took 63 minutes (but finished eventually).
copr list-packages did not work for me.


praiskup commented at 2021-09-22 15:17:18:

Commit 8684ff8c fixes this issue


praiskup commented at 2021-09-22 15:17:19:

Commit 51d484a5 fixes this issue


praiskup commented at 2021-09-22 15:17:20:

Commit d84a7cff fixes this issue

Let's have really "live" logs with copr-rpmbuild

Original issue: https://pagure.io/copr/copr/issue/78
Opened: 2017-06-09 08:51:18
Opened by: praiskup

I haven't checked, but it seems to me that commit f4561c1 is not in copr-rpmbuild yet. That's needed to have nice, line-by-line live logs.


praiskup commented at 2017-06-09 11:15:50:

This makes the log much more readable for the progress bars too, instead of:

  Installing       : nss-sysinit-3.30.2-1.0. [                        ] 119/168
  Installing       : nss-sysinit-3.30.2-1.0. [==                      ] 119/168
  Installing       : nss-sysinit-3.30.2-1.0. [=====================   ] 119/168
  Installing       : nss-sysinit-3.30.2-1.0. [======================= ] 119/168
  Installing       : nss-sysinit-3.30.2-1.0.fc26.x86_64                 119/168 

  Installing       : nss-tools-3.30.2-1.0.fc [                        ] 120/168
  Installing       : nss-tools-3.30.2-1.0.fc [=                       ] 120/168
  Installing       : nss-tools-3.30.2-1.0.fc [==                      ] 120/168
  Installing       : nss-tools-3.30.2-1.0.fc [===                     ] 120/168

You'll see only:

  Installing  : nss-tools-3.30.2-1.0.fc26.x86_64                        120/168 
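The filtering boils down to keeping only the text after the last carriage return on each physical line, so a progress bar that repeatedly rewrites itself collapses to its final state. A minimal sketch of the idea (not the actual copr-builder implementation):

```python
# Sketch: collapse carriage-return progress-bar updates to their final state.
# Each '\r' rewrites the line in a terminal, so only the text after the last
# '\r' matters. Illustration only, not the actual copr-builder code.

def collapse_cr(line):
    return line.rstrip("\n").split("\r")[-1]

def filter_log(stream):
    for line in stream:
        yield collapse_cr(line) + "\n"
```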

praiskup commented at 2017-06-14 15:11:17:

An interesting fact is that ppc64le's mock thinks that stdout is not attached to a terminal, because the log is not as verbose as the one on x86_64:
https://copr-be.cloud.fedoraproject.org/results/praiskup/test-ppc64le-build/fedora-rawhide-ppc64le/00565283-quick-package/builder-live.log


clime commented at 2017-10-06 10:27:08:

I have implemented continuing-line filtering from f4561c1 in https://pagure.io/copr/copr/c/0573d0630386492e89a8b5e942661398d661e6c4?branch=master. Let me know if you find it ok. I didn't think it was necessary to also filter color codes as in f4561c1, but I might be wrong about it.


praiskup commented at 2017-10-06 16:52:33:

Interesting, who is responsible for terminal emulation (previously done by unbuffer) so mock isn't "silent"?


praiskup commented at 2017-10-06 16:55:43:

Don't you have the resulting live log example? :)


clime commented at 2017-10-06 16:58:39:

Interesting, who is responsible for terminal emulation (previously done by unbuffer) so mock isn't "silent"?

unbuffer is still used:

https://pagure.io/copr/copr/blob/master/f/rpmbuild/copr_rpmbuild/builders/mock.py#_88


clime commented at 2017-10-06 17:01:06:

Don't you have the resulting live log example? :)

Here:

builder-live.log


praiskup commented at 2017-10-06 17:02:32:

Ah, cool :-) but why not also drop the terminal control sequences (colors, etc.)?


clime commented at 2017-10-06 17:12:35:

Ah, cool :-) but why not also drop the terminal control sequences (colors, etc.)?

Is this https://pagure.io/copr/copr/blob/f4561c149893/f/builder/copr-builder#_77 dropping anything else than colors?


praiskup commented at 2017-10-06 17:12:38:

That said, if there's a terminal emulator -- a lot of utilities behave as though the log is printed on a terminal (but the log is not a terminal...). This might make the log less readable (I have to recall where I faced this issue ... maybe GNU utilities built on top of automake's test suite? or maybe gcc error reporting? dunno). But filtering this out IMO makes sense, unless we can teach lighttpd to do it for the log files.


clime commented at 2017-10-06 17:14:32:

That said, if there's a terminal emulator -- a lot of utilities behave as though the log is printed on a terminal (but the log is not a terminal...). This might make the log less readable (I have to recall where I faced this issue ... maybe GNU utilities built on top of automake's test suite? or maybe gcc error reporting? dunno). But filtering this out IMO makes sense, unless we can teach lighttpd to do it for the log files.

Well, a user can also run it locally, and then it's printed to the terminal but colors/anything else would be lost.


praiskup commented at 2017-10-06 17:16:02:

Yes, so maybe this shouldn't be implemented in copr-rpmbuild, but rather in the wrapping code on backend.

Is this https://pagure.io/copr/copr/blob/f4561c149893/f/builder/copr-builder#_77 dropping anything else than colors?

I think it should drop all terminal control sequences, and only them. But there might be a bug, dunno.


praiskup commented at 2017-11-23 09:41:12:

I think terminal control sequences should be dropped - there's no reason not to drop them. But never mind (I'll file a regression bug once I re-observe the original issue). The log now seems to be nicely responsive, thanks.
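Dropping terminal control sequences usually means stripping ANSI escape sequences. A minimal regex sketch of the idea (it covers the common CSI "ESC [ ... letter" form only, such as color codes; other escape types exist and are not handled):

```python
import re

# Sketch: strip common ANSI CSI escape sequences (colors, cursor movement)
# from log text. Handles only the usual "ESC [ params letter" form;
# illustration of the idea discussed above, not the actual Copr code.
CSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_csi(text):
    return CSI_RE.sub("", text)
```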

copr-rpmbuild: make the config "ini" file configurable

Original issue: https://pagure.io/copr/copr/issue/80
Opened: 2017-06-09 10:48:19
Opened by: praiskup

Similarly to copr-builder, please add the --config option so other copr instances can install the same package (copr-rpmbuild) and just install an additional config file. Ideally, it would be nice if you provided all the configuration file alternatives for '*.conf' in:
https://pagure.io/copr/copr/blob/f70e4cc967b9984236ebc177c93ac78c52579988/f/builder


frostyx commented at 2017-09-14 12:24:42:

Since copr-rpmbuild rewrite to python, this is possible with

-c CONFIG, --config CONFIG

clime commented at 2017-09-14 12:51:48:

Cool, this is done then.


praiskup commented at 2017-09-15 11:11:18:

Thanks!

copr-builder does not clean state after build

Original issue: https://pagure.io/copr/copr/issue/69
Opened: 2017-05-23 08:30:04
Opened by: clime

Hello, if I run several builds in the same project and in the same chroot (not sure if this is necessary), then instead of actually building, the build results of the previous build are discovered and immediately copied to the backend.


praiskup commented at 2017-05-27 14:20:26:

There's:

```
/var/lib/copr-builder/results)
    # We know that it is safe to remove everything from here.
    rm -rf "$opt_resultdir"
    mkdir "$opt_resultdir"
    ;;
```

If I run several builds in the same project and in the same chroot

What are the exact steps to reproduce this?


praiskup commented at 2017-07-24 16:18:48:

Ping. Having used copr-builder internally in production for a while, I'm not able to reproduce this. Perhaps we could close this?


praiskup commented at 2017-11-23 09:27:13:

IMO copr-builder cleans the resultdir (copr-rpmbuild does not). Considering this notabug, then.

run autogen/configure before trying to build RPM

Original issue: https://pagure.io/copr/copr/issue/60
Opened: 2017-04-21 13:58:15
Opened by: brianjmurrell

It would be nice to be able to build from git[hub] projects that are autotools based.

But in some such cases, it could be that the SPEC file is autogenerated by autoconf from a .spec.in file

It would be nice to be able to specify for such a project steps that need to be done after checkout but before looking for a .spec file to build from.


brianjmurrell commented at 2017-04-21 16:23:10:

In fact I am having a hard time finding a project with a .spec file rather than a .spec.in file. It's a lot more prevalent than I thought.


praiskup commented at 2017-04-21 17:23:26:

The major problem of this feature request is "how to specify the build requirements". Autotools-based projects usually depend on gnulib, gettext, libtool, automake, help2man, ... but more exotic tools are often also required.

Also, the initial step is non-trivial with autotools; sometimes just 'autoreconf -vfi' is needed, but sometimes project-specific ./autogen.sh or ./bootstrap scripts are used, etc. Also note there's a good discussion in https://bugzilla.redhat.com/show_bug.cgi?id=1420540


brianjmurrell commented at 2017-04-24 14:57:10:

The major problem of this feature request is "how to specify the build requirements". Autotools-based projects usually depend on gnulib, gettext, libtool, automake, help2man, ... but more exotic tools are often also required.

True enough. Perhaps an additional field in a copr job to be interpreted exactly like a BuildRequires: in a spec file would suffice.

Also, the initial step is non-trivial with autotools; sometimes just 'autoreconf -vfi' is needed, but sometimes project-specific ./autogen.sh or ./bootstrap scripts are used, etc.

Indeed. This would be another additional field of "commands" needed to get to the point of having a .spec file ready to use.

Also note there's a good discussion in https://bugzilla.redhat.com/show_bug.cgi?id=1420540

Yes, indeed. I didn't realise that tickets for copr were being tracked in BZ also.


praiskup commented at 2017-04-24 15:15:07:

True enough. Perhaps an additional field in a copr job to be interpreted exactly like a BuildRequires: in a spec file would suffice.

When you put it this way, there already is such a feature, though it's not implemented per-package, but rather per-chroot (I'm not sure whether we mind); that's: Settings -> chroot "Edit" button -> Packages.

Indeed. This would be another additional field of "commands" needed to get to the point of having a .spec file ready to use.

Right. I'll try to implement this into srpm-tools, if nobody is faster.


clime commented at 2017-11-02 12:18:05:

Hi,

this feature is now available with SCM source type and 'make srpm' method.

Docs: https://docs.pagure.org/copr.copr/user_documentation.html#scm
Blog post: https://clime.github.io/2017/10/24/COPR-SCM.html#advanced-howto

Thank you for this suggestion

[RFE] Use Kubernetes instead of Openstack for builders and docker containers on dist-git

Original issue: https://pagure.io/copr/copr/issue/77
Opened: 2017-06-08 08:30:29
Opened by: vrutkovs

I'd like to propose using containers running in Kubernetes (or OpenShift Origin) instead of VMs for builders, and instead of local Docker containers on the dist-git machine.

The benefits:

  • A development instance of Copr can be run and tested locally using the docker-compose tool (I started working on this in https://pagure.io/fork/vrutkovs/copr/copr/branch/docker-compose)

  • Kubernetes/Openshift has internal garbage collector, quota and limit management - this should improve resource allocation

  • Splitting Copr into several containers would simplify logging, monitoring and new version deployment

I'm going to work on this, but first I'd like to ask if there are any known issues with building RPMs in containers that I should be aware of - in case it's not even worth pursuing.


clime commented at 2017-06-08 10:44:27:

I'm going to work on this, but first I'd like to ask if there are any known issues with building RPMs in containers I should be aware of - in case its not even worth pursuing

No, I think it's a good idea.


praiskup commented at 2017-06-09 10:03:50:

https://bugzilla.redhat.com/show_bug.cgi?id=1336750
https://bugzilla.redhat.com/show_bug.cgi?id=1334701
(I'll post more links related to this topic, if I'll remember ...)

Basically the blocker is: in OpenShift you'll never get a privileged container (as far as I understand the security issues), and that prevents you from using mock there. Using docker in isolation doesn't make sense; there's systemd-nspawn support in mock .. but you still need to run this in a VM.


vrutkovs commented at 2017-06-09 10:09:03:

Its fairly easy to run a privileged container in Openshift: https://docs.openshift.com/container-platform/3.5/admin_guide/manage_scc.html#grant-access-to-the-privileged-scc.

Last comments in https://bugzilla.redhat.com/show_bug.cgi?id=1336750 mention that mock needs just SYS_ADMIN privilege - I'll play with that again.

Using docker in isolation doesn't make sense, there's systemd-nspawn support in mock

Correct, although the plan is to make use of OpenShift's quotas and limits to make sure the system is always stable. It would also allow us to scale the system easily. systemd-nspawn support is not related, but it also won't stand in our way.

Note that initially I'm planning to make copr run for debug purposes using docker-compose - there is a long way to go until an actual OpenShift deployment and security discussions.


praiskup commented at 2017-06-09 10:15:57:

I don't claim it is a "technical" problem; it's a "security" problem, and unless you start your own OpenShift for copr purposes, you'll never get a generally available OpenShift instance allowing you to run copr builders there... (and that makes this a pretty useless feature, at least from my POV).
Without virtualization, granting copr users access to mock in a --privileged container means that users can arbitrarily and trivially control the whole OpenShift cloud.

The good step forward is to actually fix mock to work properly in non-privileged container first.


praiskup commented at 2017-06-09 10:17:46:

the good step forward is to actually fix mock to work properly in non-privileged container first.

Of course such a mock wouldn't be able to build any package in the wild, but only a particular subset of all packages.


vrutkovs commented at 2017-06-09 10:19:04:

you'll never get generally available OpenShift instance allowing you to run copr builders there...

I can always set up my own instance with whatever permissions are required; that shouldn't be a problem.

granting copr users to access mock in --privileged container means that users can arbitrarily and trivially control the whole openshift cloud

While "arbitrarily" is debatable, it surely isn't "trivial".

The good step forward is to actually fix mock to work properly in non-privileged container first.

I'll check if SYS_ADMIN is sufficient for mock to do the job - that should be safe enough, as it's a user-namespaced SYS_ADMIN
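A quick way to try that out would be something like the following (an illustrative sketch only: the image, chroot name, and flags are assumptions, not Copr's actual setup):

```shell
# Run mock inside a Fedora container with only CAP_SYS_ADMIN added,
# instead of the much broader --privileged flag (illustrative sketch;
# image and chroot names are placeholders).
docker run --rm -ti --cap-add=SYS_ADMIN fedora:latest \
    bash -c 'dnf -y install mock && mock -r fedora-rawhide-x86_64 --init'
```

If the chroot initialization succeeds here but fails without `--cap-add=SYS_ADMIN`, that would confirm SYS_ADMIN is the capability mock actually needs.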


praiskup commented at 2017-06-09 10:28:25:

I can always setup my own instance with whatever permissions are required,
that shouldn't be a problem.

Well, in copr-world we need to be slightly more careful than we are in Koji
world (for example) because anybody can build there; the builders need to be
perfectly protected so that Bob user can't affect Alice's builds... never.

So while I admit it is possible to have your own kube cloud, it would be rather
development stuff -- not production solution. But yeah, we need to move
somewhere so +1 for any progress this way. I just answered your question:

I'm going to work on this, but first I'd like to ask if there are any known issues with building RPMs in containers I should be aware of


msuchy commented at 2017-06-20 09:52:40:

I agree that this is premature. Mock still does not run flawlessly in docker. See https://bugzilla.redhat.com/show_bug.cgi?id=1416813
and relevant https://bugzilla.redhat.com/show_bug.cgi?id=1336750
I would welcome some testing and patches and documentation to Mock itself. And only when everything is settled down with Mock we can think about Copr.


vrutkovs commented at 2017-06-20 10:17:08:

This is a chicken-and-egg problem - it's very inconvenient to test Mock in docker without a proper frontend and options (read: Copr). And vice versa - it's pointless to go on with 'builders in Openshift' when mock in docker is poorly tested.

Anyway, in https://pagure.io/fork/vrutkovs/copr/copr/branch/docker-compose I've got minimal copr setup. Mock is being run locally in 'backend' container, this should be sufficient to try it out. Any feedback is welcome.
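For reference, a docker-compose service definition along those lines might look like the sketch below (service names, build paths, and ports are hypothetical placeholders, not taken from the linked branch):

```yaml
# Hypothetical docker-compose sketch of a minimal Copr setup;
# all names, paths, and ports here are illustrative only.
version: "2"
services:
  frontend:
    build: ./frontend
    ports:
      - "8080:80"
  backend:
    build: ./backend
    privileged: true      # mock currently needs elevated privileges
    depends_on:
      - frontend
```

Running `docker-compose up` with such a file would bring up the frontend and a backend container that runs mock locally.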


praiskup commented at 2017-06-22 11:09:15:

This is a chicken-and-egg problem - its very inconvenient to test Mock in docker without a proper frontend and options (read: Copr).

This is not a chicken-and-egg problem. Mock is and always was naturally used by package maintainers without the Copr frontend (and that is not inconvenient).


clime commented at 2017-06-23 12:30:17:

I agree that this is premature. Mock still does not run flawlessly in docker. See https://bugzilla.redhat.com/show_bug.cgi?id=1416813
and relevant https://bugzilla.redhat.com/show_bug.cgi?id=1336750
I would welcome some testing and patches and documentation to Mock itself. And only when everything is settled down with Mock we can think about Copr.

This is really nice work. The first commit could be made into a PR immediately. It will actually break the testing framework right now, but fixing it should be relatively easy.

Note, however, that I would actually much prefer to use Ansible for the container setup instead of Dockerfiles and docker-compose.

If you know ansible or you are willing to learn it, you can do it yourself. Otherwise you can make a PR immediately and we will continue on that.

The second commit (the dist-git one) might be used only if we stick to actually "building srpms" on the dist-git machine. Currently, we want to only fetch the spec, fetch the sources, and import it all to dist-git without actually building anything (the MockSCM provider is currently an example of that).

What you can focus on, however, is making copr-rpmbuild run in an Openshift container.


vrutkovs commented at 2017-06-23 12:57:19:

Note, however, that I would actually much prefer to use Ansible for the container setup instead of Dockerfiles and docker-compose.

I agree, however ansible-container is not stable yet. I think it would be a good idea to do that when it hits the 1.0 milestone

The second commit (the dist-git one) might be used only in case we will stick to actually "building srpms" on the dist-git machine. Currently, we want only fetch spec, fetch sources, and import it all to dist-git without actually building anything (MockSCM provider is currently an example of that).

Right, although this behaviour is configurable - it's just that this container sets it by default. I'll think about a better way to enable/disable this.

What you can focus on, however, is making copr-rpmbuild run in a openshift container.

Right, that's my current focus atm.


clime commented at 2017-06-23 15:48:00:

Note, however, that I would actually much prefer to use Ansible for the container setup instead of Dockerfiles and docker-compose.

I agree, however ansible-container is not yet stable now. I think it would be a good idea to do that when it hits 1.0 milestone

I was actually thinking about using this http://docs.ansible.com/ansible/docker_container_module.html#docker-container but feel free to use anything else. Also you may just send the PR now and we can continue on it after merge.
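The linked `docker_container` module approach could look roughly like this (a hypothetical sketch: the task name, container name, image, and options are all illustrative assumptions):

```yaml
# Hypothetical Ansible task using the docker_container module to start
# a Copr backend container; names and options are placeholders.
- name: Start Copr backend container
  docker_container:
    name: copr-backend
    image: copr-backend:latest
    state: started
    privileged: yes        # mock still requires elevated privileges
    restart_policy: always
```

Compared with docker-compose, this keeps the container orchestration in the same Ansible codebase that already provisions the rest of Copr.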

The second commit (the dist-git one) might be used only in case we will stick to actually "building srpms" on the dist-git machine. Currently, we want only fetch spec, fetch sources, and import it all to dist-git without actually building anything (MockSCM provider is currently an example of that).

Right, although this behaviour is configurable - its just this container is setting it by default. I'll think about a better way to enable/disable this.

We won't be using containers on dist-git (likely). They were needed for security when we were building srpms as root inside mock chroot. This is no longer happening and the code can be rewritten to something more simple.


vrutkovs commented at 2017-06-23 15:55:47:

I was actually thinking about using this http://docs.ansible.com/ansible/docker_container_module.html#docker-container but feel free to use anything else

Oh, I see - the plan is to replace docker-compose with ansible-based orchestration? That's a good idea, although it's not really handy for local development. I'll look into this, but this is not a priority yet.

We won't be using containers on dist-git (likely)

Oh, I see. I started this when that code was still in place; sure, I'll rewrite that part


msuchy commented at 2020-11-23 19:56:57:

Copr's components have been able to run in containers for some time. The builders are still problematic, and even after 3 years, Mock still has issues running in a container. The setup is fully Ansible-ized, though.

I do not expect any more work on this one.


praiskup commented at 2020-11-23 20:09:51:

Agreed, with one fix ....

Copr's parts can be run in containers for some time. The builders are still problematic. And even after 3 years, Mock still has issues running in the container.

Mock doesn't have the issue anymore, speaking of rootless podman containers.
https://github.com/rpm-software-management/mock/wiki#mock-inside-podman-fedora-toolbox-or-docker-container
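Following the linked wiki, a rootless podman session could be sketched roughly as below (an illustrative sketch; the image, user name, and chroot are placeholders, and in rootless mode `--privileged` only grants the invoking user's own namespaced privileges):

```shell
# Sketch of running mock inside a rootless podman container.
# In rootless podman, --privileged does not grant real root on the host;
# it only exposes the caller's own (user-namespaced) privileges.
podman run --rm -ti --privileged fedora:latest bash -c '
    dnf -y install mock &&
    useradd builder && usermod -a -G mock builder &&
    su builder -c "mock -r fedora-rawhide-x86_64 --init"
'
```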

But it still isn't a solution for all packages.
