Comments (10)
I'll take a look now.
from l4v.
> I had been wanting these to give me a checkout that can work with MCS proofs, to the extent that we have them, if only I switch l4v to the `rt` branch.
And now I see that my thinking here doesn't make sense. Or at least, it only made sense as long as the MCS proofs and C code were not formally connected, which is no longer the case.
So yes, you're entirely correct.
I'll have a go at implementing cpp-compatible updates for `mcs.xml`.
...
from l4v.
@mbrcknl do you have time to look into this, or should I have a go later this week? We're currently not deploying any more to the verification manifest, so we should not leave it too long.
from l4v.
@lsf37 Thinking about how to do this...
Do you think it would be worth pulling the `mcs-export` job out into a separate workflow? Just thinking that `proof-deploy.yml` is the wrong place for it, since proof-deploy is supposed to be a final check on a manifest we already believe is deployable, and therefore it shouldn't contain things not related to the deployability of a manifest.
Taking it out would make things more complicated, since I'd still need separate workflows for the MCS export and decompilation to stay within the 6 hour budget. But it would limit the damage if something like this comes up again.
I think the way I would do it would be to add a new workflow (say `binary-mcs.yml`) which only runs on commits to `rt`. It would do the MCS export, and then trigger the decompilation workflow in much the same way that `proof-deploy.yml` triggers it for `master`.
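As a rough sketch of what that could look like, assuming hedged names throughout: `binary-mcs.yml` is the name suggested above, while the decompilation workflow file name and job names here are placeholders, not the real ones.

```yaml
# Hypothetical sketch of binary-mcs.yml; only the branch filter is taken
# from the discussion, everything else is an assumed placeholder.
name: Binary MCS

on:
  push:
    branches: [rt]

jobs:
  mcs-export:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # placeholder for the actual MCS export steps
      - run: echo "run MCS export here"

  trigger-decompile:
    needs: mcs-export
    runs-on: ubuntu-latest
    steps:
      # trigger the decompilation workflow on the rt branch, analogous to
      # how proof-deploy.yml triggers it for master; "decompile.yml" is a
      # placeholder for the real workflow file name
      - run: gh workflow run decompile.yml --ref rt
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```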
At this stage, I probably wouldn't want to add it to the proof.yml workflow, since I wouldn't want it to run on every PR.
Would that make sense?
from l4v.
Yes, that sounds eminently reasonable.
from l4v.
@lsf37 Thinking about this some more...
IIUC, proof-deploy runs if there's a push to l4v master, or a manual bump of the devel.xml manifest. That's useful for BV, since it roughly corresponds to the cases BV should check for non-MCS.
But currently, the preprocess test only checks non-MCS configurations. So a manual bump is only needed when there is a change to the non-MCS parts of seL4 C code. Changes that only touch MCS parts will be automatically deployed to the default manifest. Do I read that right?
Now that you're starting to work on the C proofs for MCS, would this be a good time to make the preprocess-deploy workflow fail on seL4 changes that only touch MCS? I.e. require a manual bump in those cases? I notice that the seL4 PR checks already do this.
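One way to strengthen the check could be a build matrix over both configurations, so that MCS-only C changes fail the same job that non-MCS changes already do. This is only a sketch: the script name `preprocess-check.sh` and the `FEATURES` variable are assumptions, not the actual l4v/seL4 CI interface.

```yaml
# Hypothetical matrix for the preprocess check: run it for both the
# non-MCS (default) and MCS configurations.
jobs:
  preprocess:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        mcs: ['', 'MCS']   # empty string = default (non-MCS) configuration
    steps:
      - uses: actions/checkout@v4
      # "preprocess-check.sh" and FEATURES are placeholders for the real
      # check script and its configuration mechanism
      - run: ./preprocess-check.sh
        env:
          FEATURES: ${{ matrix.mcs }}
```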
With stronger preprocess checks, the MCS export for BV could be usefully triggered on the `manifest-update` event, as well as on pushes to the l4v `rt` branch.
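In GitHub Actions terms, that trigger combination might look like the fragment below. The `manifest-update` event type is taken from the discussion; whether it arrives as a `repository_dispatch` event is an assumption about how the manifest trigger is wired up.

```yaml
# Sketch: run the MCS export on rt pushes and on manifest updates.
on:
  push:
    branches: [rt]
  repository_dispatch:
    types: [manifest-update]
```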
Later, when MCS CRefine is done, there'll be the problem that we only want proof-deploy to push a new default manifest if both the MCS and non-MCS proofs pass. At that point, I guess proof-deploy would need to deal with both rt and master branches of l4v in a single workflow. But since we're not there yet, it probably makes sense to have a separate MCS export workflow for now.
I made a PR for the preprocess-deploy workflow: seL4/seL4#876. Though, I'm happy to drop it if you don't think we're ready for it yet.
from l4v.
Hm, it's a bit more complicated than that. The current auto-deployments are:
- seL4 auto-deploys to `devel.xml`. cpp-compatible updates are not recorded in `default.xml`
- l4v auto-deploys to `default.xml` (based on runs from `devel.xml` + push to l4v `master`)
- a manual pre-process bump will manually update `devel.xml` and will trigger an l4v run, which will auto-deploy to `default.xml`
Changes that are MCS-relevant don't make sense to record in `devel.xml`, because that manifest doesn't work for MCS proofs. Neither does `default.xml`. So auto-deployment as such doesn't make sense for MCS changes overall, and I don't think it makes sense to check for them in the auto-deploy test.
We do test for MCS changes on pull requests, so we do know that stuff will break if merged.
To do this properly, we should set up auto-deployment to `mcs.xml` and actually build `CSpec` etc from there, as well as do manual bumps of that manifest when preprocess sync fails there.

Edit: we might need an additional `mcs-fixed.xml` or similar that records working hashes like `default.xml` does for `master`.
from l4v.
> - seL4 auto-deploys to `devel.xml`. cpp-compatible updates are not recorded in `default.xml`

Right you are. I didn't actually look at the seL4-pp script, and just assumed it updated both `devel.xml` and `default.xml`.
Is it a deliberate choice to avoid cpp-compatible updates to `default.xml`? What's the rationale? (Not that it matters for this issue.)
> - l4v auto-deploys to `default.xml` (based on runs from `devel.xml` + push to l4v `master`)
> - a manual pre-process bump will manually update `devel.xml` and will trigger an l4v run, which will auto-deploy to `default.xml`

I did understand those two parts, though.
> Changes that are MCS relevant don't make sense to record in `devel.xml`, because that manifest doesn't work for MCS proofs. Neither does `default.xml`. So auto-deployment as such doesn't make sense for MCS changes overall and I don't think it makes sense to check for them in the auto-deploy test.

Right, I guess I'd been looking at the `devel.xml` and `default.xml` manifests differently. I had been wanting these to give me a checkout that can work with MCS proofs, to the extent that we have them, if only I switch l4v to the `rt` branch. But in your view, IIUC, we shouldn't expect that in a strong sense, since the automation might put a seL4 hash into those manifests that doesn't parse for MCS, if we don't pay attention to the preprocess/c-parser tests on seL4 PRs. We can mostly expect it, though, since we normally do pay attention to those tests.
> We do test for MCS changes on pull requests, so we do know that stuff will break if merged.

Yep, I saw that.
> To do this properly, we should set up auto-deployment to `mcs.xml` and actually build `CSpec` etc from there, as well as do manual bumps of that manifest when preprocess sync fails there.
>
> Edit: we might need an additional `mcs-fixed.xml` or similar that records working hashes like `default.xml` does for `master`.

Yes, the main reason I wanted to get an MCS-compatible checkout from `devel.xml` was that `mcs.xml` isn't automatically kept up-to-date with cpp-compatible seL4 versions like `devel.xml` is. And perhaps also because I wouldn't expect people to feel the same pressure to perform manual manifest bumps to `mcs.xml` as we do for `devel.xml`, when updates aren't cpp-compatible.
I'll think about this some more...
from l4v.
> Is it a deliberate choice to avoid cpp-compatible updates to `default.xml`? What's the rationale? (Not that it matters for this issue.)
IIRC the original design was that any push to `devel.xml` would trigger a proof run, which would then update `default.xml`. At least I think that's what we used to do on Bamboo. I removed the trigger for pushes that came from CI because I wanted to reduce the number of proof runs to keep cost down. The pre-process test checks for AST equivalence, and I think `CSpec` + dependents would rebuild on any text change (incl. comments) visible in the preprocessed sources. Maybe that's not such a big deal, but I opted for the conservative option for now.
If we wanted to change it, it'd be just this one line here in the verification manifest trigger workflow that would need to be removed.
Alternatively we could update `default.xml` directly from the preprocess script, but then we could start to get race conditions between updates coming from l4v and updates coming from seL4, esp. since those test runs have very different running times. I.e. a later seL4 run could be overwritten by an earlier l4v run. It'd all smooth out over time, but having a single source for updates still sounds safer to me.
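If updates ever did need to come from more than one workflow, one generic mitigation in GitHub Actions is a shared concurrency group, which serializes the runs that push to the manifest instead of letting them race. This is just an illustrative fragment; the group name is made up, and it wouldn't fix out-of-order results by itself, only overlapping pushes.

```yaml
# Sketch: give every job that pushes manifest updates the same concurrency
# group, so runs queue up rather than racing ("manifest-update" is an
# illustrative group name).
concurrency:
  group: manifest-update
  cancel-in-progress: false
```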
from l4v.
I think we may actually soon be approaching a state where your original idea might be right, where the l4v `master` and `rt` branches should always work on the same seL4 hash. We're not quite there yet, but once CRefine is finished for at least one architecture, I think we might want to have that state.
from l4v.