prismmodelchecker / prism

The main development version of the PRISM model checker.

Home Page: http://www.prismmodelchecker.org/

License: GNU General Public License v2.0

Makefile 1.31% C 23.27% Shell 1.98% Pascal 0.11% CSS 0.01% TeX 0.01% Python 0.33% Batchfile 0.11% Java 35.93% HTML 29.58% C++ 6.65% M4 0.03% Lex 0.03% Yacc 0.09% NSIS 0.02% Perl 0.01% Scheme 0.01% Raku 0.44% Vim Script 0.01% Roff 0.06%

prism's People

Contributors

chrisnovakovic, davexparker, fdannenberg, forejtv, hotblack77, kleinj, ludwigpauly, maxkurze1, merkste, nishanthan, phate09, t-nojiri


prism's Issues

Humongous amounts of memory needed to parse large models

We are trying to develop some new algorithms for CTMDP reachability in PRISM and are facing some major issues with the parsing of large models. As discussed in issue #27, I had made just a few modifications in order to input CTMDPs in the same format as MDPs.

The amount of memory PRISM needs to parse a model of size 160MB exceeds 25GB. I was wondering why this happens, and why parsing is so slow (took around 6-7 minutes with 52 threads, not all running simultaneously). Would there be any relatively simple way to get around this limitation? In comparison, IMCA is able to solve this CTMDP in around the same time.

My first thoughts were to use the explicit format (tra). I assumed that the ModulesFile object coming out of parsing the explicit format can be used to instantiate a ModelGenerator in the same way as earlier, but it doesn't seem to be the case. The ModelGenerator object doesn't populate the transition list at all (on calling getTransitionList). I wonder if I am doing something wrong here.

Any hints or thoughts on how to proceed would be appreciated.

Strange NaN result for multi-objective

We are seeing a strange result for the following (heavily minimized) model:

mdp

module m
    loc : bool init true;
    yes : bool init false;

    [a] true -> 0.994005 : (loc'=true) + 0.000995 : (loc'=false) + 0.005  : true ;
    [g] true -> (yes'=true);
endmodule

Properties:

multi(Pmax=?[F false], P<=0.003[F yes])    // NaN
multi(Pmax=?[F false], P<=0.004[F yes])    // 0
multi(Pmax=?[F false], Pmin=?[F yes])      // [(0.0,0.0)]

We'd expect 0 for the first query as well.

This is using the default settings; if we switch to the LP solver, we get the expected output. Increasing the termination epsilon does not seem to help.

Is this just due to the fact that Pmax yields 0? I.e., could one add a pre-check for whether one of the target sets is empty or unreachable? Or is there something going wrong numerically?

Verbose log output for computation of the first query:

The initial target point is (1.0, 0.997)
The initial direction is (0.5007511266900351, 0.499248873309965)
Iterative method: 6 iterations in 0.00 seconds (average 0.000000, setup 0.00)
Optimal value for weights [0.500751,0.499249] from initial state: 0.499249
New point is (0.0, 9.999999999968754E-4).
Decided, target is (NaN, 0.997)
The value iteration(s) took 0.0 seconds altogether.
Number of weight vectors used: 1
Multi-objective value iterations took 0.0 s.

Value in the initial state: NaN

With smaller epsilon:

The initial target point is (1.0, 0.997)
The initial direction is (0.5007511266900351, 0.499248873309965)
Iterative method: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)
Optimal value for weights [0.500751,0.499249] from initial state: 0.499249
New point is (0.0, 0.0010000000000000002).
Decided, target is (NaN, 0.997)

And the second one:

The initial target point is (1.0, 0.996)
The initial direction is (0.501002004008016, 0.49899799599198397)
Iterative method: 2 iterations in 0.00 seconds (average 0.000000, setup 0.00)
Optimal value for weights [0.501002,0.498998] from initial state: 0.498998
New point is (0.0, 1.0).
Target lowered to (0.003984000000000114, 0.996)
New direction is (1.0, 0.0)
Iterative method: 2 iterations in 0.00 seconds (average 0.000000, setup 0.00)
Optimal value for weights [1.000000,0.000000] from initial state: 0.000000
New point is (0.0, 0.0).
Target lowered to (0.0, 0.996)
New direction is null
The value iteration(s) took 0.001 seconds altogether.
Number of weight vectors used: 2
Multi-objective value iterations took 0.001 s.

Value in the initial state: 0.0

missing states in output model

Hi,

I created a DTMC model:
======= example.pm ======
dtmc
module example
s : [0..11] init 0;
[]s=0->0.01:(s'=0)+0.40:(s'=1)+0.40:(s'=2)+0.19:(s'=8);
[]s=1->0.20:(s'=0)+0.80:(s'=3);
[]s=2->0.40:(s'=0)+0.5:(s'=3)+0.1:(s'=5);
[]s=3->0.20:(s'=0)+0.3:(s'=1)+0.3:(s'=2)+0.2:(s'=4);
[]s=4->1:(s'=4);
[]s=5->0.75:(s'=2)+0.25:(s'=6);
[]s=6->0.30:(s'=5)+0.7:(s'=7);
[]s=7->1.00:(s'=3);
[]s=8->0.9:(s'=0)+0.1:(s'=8);
[]s=9->0.35:(s'=8)+0.6:(s'=10)+0.05:(s'=11);
[]s=10->0.8:(s'=9)+0.2:(s'=11);
[]s=11->1:(s'=4);
endmodule

Then invoking prism using the following command:

prism example.pm -exportmodel example.all -const p=0.9

Instead of receiving a state machine with 12 states, only 9 states were reported in the transition matrix. I have checked the parser of the pm input file and couldn't find a problem there.

  ===== example.tra =====

9 21
0 0 0.01
0 1 0.4
0 2 0.4
0 8 0.19
1 0 0.2
1 3 0.8
2 0 0.4
2 3 0.5
2 5 0.1
3 0 0.2
3 1 0.3
3 2 0.3
3 4 0.2
4 4 1
5 2 0.75
5 6 0.25
6 5 0.3
6 7 0.7
7 3 1
8 0 0.9
8 8 0.1

Best regards,
Yijun

org.w3c.dom not accessible when compiling from source

I was compiling PRISM from source. I ran 'make' from the command line to build the C parts of PRISM first and then imported the project into Eclipse. After refreshing Eclipse, I got the error "The package org.w3c.dom is not accessible". I attached a screenshot.

I tried to download this missing package from Maven Central. I downloaded the two JARs listed below and put them in the lib subdirectory, but that did not help to resolve the issue.
dom-2.3.0-jaxb-1.0.6.jar
mathml-dom-java-2.0.jar

I am using macOS 10.13.3 and Eclipse for RCP and RAP Developers, Version: Oxygen.2 Release (4.7.2). Any help is appreciated.

[Screenshot attached: screen shot 2018-03-09 at 12 43 34 am]

Makefiles: Improve javac invocation

Currently, in the Makefiles, javac is called once for each .java file that was changed. javac has a non-negligible startup time and the "Java way" is to pass it multiple .java files at once.

Short of switching to another build system, we could probably improve compile times by using the pattern proposed in this stackoverflow answer.

This would need some adaptation of the PRISM Makefiles, but once it is done for one of them, it could be replicated quite easily on all the Makefiles in prism/src/*, as those are mostly copy-pasted.

explicit/FoxGlynn.run() does not halt for certain parameters

Hi!
I am currently working on an arbitrary-precision version of FoxGlynn using BigDecimal.
It is going along fine. I have a testing main method to compare results and performance with the original double-precision FoxGlynn. For certain parameters, this main method runs indefinitely.
I have found that the problem lies in the original double-precision FoxGlynn.

INPUT:

double q = 4, time = 3; // q*time = q_tmax (the time parameter)
double uf = 1.0e-300; // underflow
double of = 1.0e-300; // overflow
double acc = 1.0e-25; // accuracy (of the result)
new FoxGlynn(q*time, uf, of, acc);

PROBLEM DESCRIPTION:
FoxGlynn has two approaches to computing results.

  1. "Naive" approach is chosen if the first parameter q_tmax < 400. No further parameter checking is done.
  2. "Actual" FoxGlynn is chosen otherwise. This approach first checks whether accuracy is not more precise than it can handle.
if (accuracy < 1e-10) {
   throw new PrismException("Overflow: Accuracy is smaller than Fox Glynn can handle (must be at least 1e-10).");
}

Since my input q_tmax = 12 is handled by the "naive" approach, the accuracy is not checked.
Through debugging, I found that the following loop never terminates.

k = 1;
do {
    lastval *= q_tmax / k; // invariant: lastval = q_tmax^k / k!
    accum += lastval;
    w.add(lastval * expcoef);
    k++;
} while (accum < desval);

Eventually, the computed lastval increments get so small that adding them to accum does nothing.
Here is a debugging screenshot.
[Screenshot attached: screenshot from 2018-03-31 12-00-55]

WHY THIS MATTERS:
FoxGlynn is used for many explicit engine computations within CTMCModelChecker.
While some methods have a hardcoded termination epsilon (which is passed as the accuracy to FoxGlynn), some do not. For example, I believe CTMCModelChecker.computeInstantaneousRewards uses the termination epsilon from the current PRISM settings.
Say that Bob wants to compute instantaneous rewards for some CTMC model with a very high precision of 1.0e-30 using the explicit engine. Bob knows that such a high-precision computation will take a lot longer to finish, so he is prepared to wait a while. Sadly, Bob would have to wait forever (or until the w vector eats up the entire available memory, throwing java.lang.OutOfMemoryError: Java heap space).

PROPOSED SOLUTION:
Have both approaches check the input parameters for an accuracy that is too small, perhaps directly in the FoxGlynn constructor. The "too small accuracy" problem is mostly related to double precision rather than to the approach used.
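
A minimal sketch of such a guard (the wrapper class and field names are hypothetical; the 1e-10 threshold mirrors the check already present in the full Fox-Glynn branch):

import prism.PrismException;

/** Sketch: reject accuracies that double precision cannot resolve, for both approaches. */
public class FoxGlynnSketch
{
	public FoxGlynnSketch(double qTmax, double underflow, double overflow, double accuracy) throws PrismException
	{
		// Below ~1e-10, the weight increments underflow before the accumulated sum
		// reaches the target value, so the "naive" loop never terminates.
		if (accuracy < 1e-10) {
			throw new PrismException("Accuracy is smaller than Fox-Glynn can handle (must be at least 1e-10)");
		}
		// ... the existing naive / full Fox-Glynn computation would follow here ...
	}
}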

Log output: Double constant parameter display values are sometimes truncated

In the log output, the values of double constants (either directly set from the command line or via experiments) are only displayed with a maximum of 6 fraction digits. In some cases, this can lead to suboptimal output:

$ prism -const k=1E-7 ....
[....]
Building model...
Model constants: k=0

The output is formatted via parser.Values.toString(), which uses parser.Values.valToString(). This converts double values with only 6 fraction digits.

It might be an option to use PrismUtils.formatDouble() instead.
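
A small standalone demonstration of the truncation and of a general-purpose format that keeps the information (this uses plain String.format; the exact mechanism inside valToString() may differ):

public class ConstantFormatDemo
{
	public static void main(String[] args)
	{
		double k = 1e-7;
		// Fixed-point formatting with 6 fraction digits, as in the current output:
		System.out.println("k=" + String.format("%.6f", k)); // prints k=0.000000
		// A general (scientific-aware) format keeps the information:
		System.out.println("k=" + String.format("%g", k));   // prints k=1.00000e-07
	}
}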

GUI: Font size

On high-DPI displays on Windows and Linux under Java 8, the default font sizes are tiny. This seems to be fixed when running Java 9, as then the Java UI components are actually high-DPI aware.

For those users that want to / have to stay on Java 8, it would be nice to have a more convenient way to get to usable font sizes.

As far as I can tell, the situation currently is:

  • Using the Increase/Decrease font size actions in the Options menu, one can increase the font size of the menus etc by +1/-1 points
  • The fonts in the model and property editor can be changed from the font selector in the settings dialog.
  • There are quite a few fonts that are not actually adjusted / adjustable.

I would propose the following:

  • Introduce a global font scale factor, which can be adjusted via the Increase/Decrease actions (perhaps in +/-10% steps?)
  • Add some sub-menu there to allow resetting to 100% and to some convenient value for high DPI displays (200%, 300%?)
  • During rendering, adjust all font sizes by this factor. So, if the model editor font from the settings dialog has 20pt font size, that would be scaled as well.
  • Identify all the locations where currently fixed font sizes are used (probably sufficient to scan the call sites for new Font) and make them take the factor into account.
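
A minimal sketch of the scaling step (assuming a single global scale factor; the class and method names are hypothetical):

import java.awt.Font;

public class FontScaling
{
	/** Global font scale factor: 1.0 = 100%, 2.0 = 200% for high-DPI displays. */
	private static volatile float fontScale = 1.0f;

	public static void setFontScale(float scale)
	{
		fontScale = scale;
	}

	/** Wrap every place that currently creates or looks up a Font. */
	public static Font scaled(Font base)
	{
		// deriveFont(float) keeps family and style; only the point size changes.
		return base.deriveFont(base.getSize2D() * fontScale);
	}
}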

Some related items:

  • Increasing the font size via the menu actions leads to white bars appearing on the left for some of the boxes in the simulator view
  • The current hot-key for "Increase" (Cmd+SHIFT+=) does not work on German keyboards and OS X. Can we use + instead of SHIFT+= in the code?
  • I've seen long error messages (such as the overflow error during symbolic model building) leading to an error dialog box that was wider than the screen. Perhaps we can use some other widget for the text, there?

Explicit -steadystate computation broken for initial BSCCs

As noted by @merkste in PR #54, the logic to determine whether all initial states (states with probability >0 in the initial distribution) are contained in a single BSCC is broken.

// See which states in the initial distribution do *not* have non-zero prob
startNot = new BitSet();
for (int i = 0; i < n; i++) {
	if (initDist[i] == 0)
		startNot.set(i);
}
// Determine whether initial states are all in a single BSCC
allInOneBSCC = -1;
for (int b = 0; b < numBSCCs; b++) {
	if (!bsccs.get(b).intersects(startNot)) {
		allInOneBSCC = b;
		break;
	}
}
// If all initial states are in a single BSCC, it's easy...
// Just compute steady-state probabilities for the BSCC
if (allInOneBSCC != -1) {
	mainLog.println("\nInitial states all in one BSCC (so no reachability probabilities computed)");
	bscc = bsccs.get(allInOneBSCC);

Consider the following DTMC

dtmc

init true endinit

module m
  s: [0..1];

  [] true -> true;
endmodule

which has two initial states that each form a BSCC.

Result:

$ prism init-steady-bscc.prism -steadystate -hybrid
...

Printing steady-state probabilities in plain text format below:
0:(0)=0.5
1:(1)=0.5

$ prism init-steady-bscc.prism -steadystate -explicit
Printing steady-state probabilities in plain text format below:
0:(0)=1.0

Here, the logic in the explicit engine stops at the first BSCC that does not intersect any non-initial states; it should instead check that all of the initial states are contained in that BSCC.
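
A minimal sketch of the corrected check (standalone, with hypothetical method and parameter names; the actual code would operate on the existing initDist/bsccs data):

import java.util.BitSet;
import java.util.List;

public class InitialBsccCheck
{
	/**
	 * Returns the index of the BSCC containing all states with non-zero initial
	 * probability, or -1 if the initial states are not all in a single BSCC.
	 */
	public static int allInOneBSCC(double[] initDist, List<BitSet> bsccs)
	{
		// Collect the states with non-zero initial probability
		BitSet start = new BitSet();
		for (int i = 0; i < initDist.length; i++) {
			if (initDist[i] != 0)
				start.set(i);
		}
		// Containment check: start \ bscc must be empty, i.e. every initial
		// state lies inside the BSCC (not the other way around)
		for (int b = 0; b < bsccs.size(); b++) {
			BitSet outside = (BitSet) start.clone();
			outside.andNot(bsccs.get(b));
			if (outside.isEmpty())
				return b;
		}
		return -1;
	}
}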

Build process: Refactor JNI .h file generation

In Java 10 (released this week), the javah tool for generating the C header files for the JNI native functions has been removed (see http://openjdk.java.net/jeps/313). For Java 9, it's still there but yields deprecation warnings.

Compilation still works on Java 10, as it silently uses the pregenerated .h files from the repository, but that's clearly not a good idea in the long term.

The new way of generating the .h files (available since Java 8) happens during the javac run, specifying an output directory using the -h switch. A quick look at the current package Makefiles indicates that the javac calls are done before the C/C++ compilation, so it might suffice to just add the -h switch to the normal Java compilation run.

Some things to check:

  • Are the JNI headers only included by the corresponding implementation C/C++ files in the same package/directory? I guess that's the case.
  • There is currently dos2unix post-processing for the generated .h files on Windows. I guess that's so unchanged files don't differ from the pre-generated ones in the repository?
  • Would it make sense to remove the pre-generated .h files from the repo?

CUDD crash on Win64 with JDK9

As reported in #12, the PRISM GUI does not start up properly on Windows with a JDK9 early release snapshot. The 32-bit version of PRISM works fine, but the 64-bit version crashes quite early during CUDD initialisation in dd.dll (both xprism and standard prism). JDK8 works fine.

Some initial debugging indicates that this might be related to pointer arithmetic that CUDD uses to store complement edge bits inside DdNode pointers. If that is the only problem, it should be easy to fix.

Hang in simulator on CTMC with terminal state

Trying to replicate the bug report in https://sourceforge.net/p/prism-mc/bugs/20/, I stumbled upon another bug: The simulator hangs indefinitely sampling a path for the second property.

bug.prism:

stochastic

const int M; // number of possible elements of FGF

const double k5=1; // an hour

 module SYSTEM

        reloc : [0..M] init 0; // relocated FGFR

        [] reloc<M -> k5 : (reloc'=reloc+1);
//      [] true -> k5 : (reloc'=min(M,reloc+1));

endmodule

bug.props:

const int L;
const double T;

P=? [ true U<=T reloc>=L ]
P=? [ true U[T,T] reloc>=L ]

Invocation:

prism bug.prism bug.props -fixdl -const L=0,T=100,M=10 -sim -simsamples 100 -simpathlen 1000

Observed behaviour: PRISM does not terminate.

GUI: Saving property list fails when model has already been changed

Steps to reproduce:

  1. Create a model, containing some constant k, don't save the model yet.
  2. Create a property that references k, don't save the property list yet.
  3. Choose Model -> New -> Prism model from the menu.
  4. There's a dialog asking whether to save the model; proceed.
  5. There's a dialog asking whether to save the property list; select Yes.
  6. Now, there is an error message dialog saying Cannot save properties list: some properties are invalid.

At this point, the model editor already has an empty model and the properties editor shows the property as invalid. Further attempts to save the properties list fail as well with the same error message - until one has a model again for which the properties are not invalid.

Proposed solution alternatives:

  1. Change the Cannot save properties list: some properties are invalid error dialog from having just an Ok button to one that also has a "save anyway" option. I've found the following comment

    // only allow save if all props valid (can't reopen file otherwise)
    which seems to indicate that loading an invalid property file will be problematic, as parsing will be done via PropertiesFile. I guess that refers to properties with syntax errors; type/semantic errors like the one with the constants should not be a problem, right? I guess one could have some kind of ExpressionOpaque expression type that just stores an unvalidated string to optionally pass broken properties back to the GUI.

  2. Don't clear the model before the properties list is saved. I guess that could also be necessary for model loads.

GUI: Property result dialog garbles < in error message

In the dialog that shows the property results (GUIPropertyResultDialog.java), the label
that is used to display the result wraps the result string in <html>...</html> tags, but does not quote special HTML characters:

resultLabel.setText("<html>" + gp.getResultString().replaceAll("\n", "<br/>") + "</html>");

So, e.g., if an error message contains s<3, then the < symbol gets garbled. I noticed this with the error message of the test case for #36 before the fix was applied.

Proposed solution:
If we can be sure that the result string (might be a result, an error message, ...) never has HTML content to begin with, then we can simply quote the special HTML characters. I guess we regard all PRISM input (model, properties, ...) as trusted and non-malicious and thus don't have to worry about Cross Site Scripting or other attacks, right? Thus, we should get away with only quoting <>&".
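
A minimal sketch of the quoting step (standalone; the helper name is hypothetical):

public class HtmlQuote
{
	/** Quote the HTML special characters; '&' first so the other escapes are not re-escaped. */
	public static String quoteHtml(String s)
	{
		return s.replace("&", "&amp;")
				.replace("<", "&lt;")
				.replace(">", "&gt;")
				.replace("\"", "&quot;");
	}

	public static void main(String[] args)
	{
		String resultString = "Error: invalid bound s<3\nsecond line";
		String label = "<html>" + quoteHtml(resultString).replace("\n", "<br/>") + "</html>";
		System.out.println(label); // the s<3 now survives as s&lt;3
	}
}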

Windows: Handle -javamaxmem when starting via BAT

On Windows, the value for the Java maximum heap size is hardcoded in the BAT batch startup scripts, i.e., any -javamaxmem parameter is ignored.

It would be nice to be able to handle this parameter (and -javastack of PR #81) similarly to how it's done on MacOS or Linux, but without adding dependencies on scripting languages etc. that are not shipped by default on Windows. I.e., when starting via Cygwin, the bash startup scripts handle this fine, but the "pure Windows" solution does not.

I see two options:

  • Handle the parameters in a script; are PowerShell scripts powerful enough to do something like this?
  • Provide a small launcher binary (C++?) that just handles this limited argument parsing and then execs Java with the appropriate parameters.

Other options?

I have only very limited Windows scripting experience, perhaps there's someone who'd like to take a stab?

Wrong rate combination in CTMCs?

Hi, I'm trying to model M/M/s/K queues with PRISM and I've stumbled upon a behaviour I do not understand.
I'm uploading two models that I expected to behave exactly the same but they don't.
Their difference is in the server guard.
The "right" model requires that there are at least N requests waiting (waiting>=N) for server N to fire, while the "wrong" one only requires that there is some request waiting (waiting>0).
I don't understand why, with the "wrong" model, PRISM takes the total service rate out of the state where waiting=1 to be the server rate times the number of servers, since only one of the servers will actually fire and serve the request.
The only difference that I'd expect between the two models is that the choice of server is more non-deterministic in the "wrong" model than in the "right" one.

The dot diagrams of the transitions show the difference - no need for experiments really.

Nevertheless, I've also tested them with "S=? [ (waiting=n) ]", to compute the probability that there are n requests waiting (for n=0:1:Capacity), and the right model produced the answers I expected, based on the formulae I got from: http://profs.degroote.mcmaster.ca/ads/parlar/courses/o711/documents/Formulas-HL-9e-2011-10-31.pdf

Best,
Christos

MMcK-right.prism.pp.gz
MMcK-wrong.prism.pp.gz

No help available for steadystate/transient switch

The help command doesn't provide any information about the -steadystate (-ss) or -transient (-tr) switches.
(see: Running PRISM / Computing Steady-state And Transient Probabilities)

Since I'm running into the problem "Error: Iterative method did not converge within 10000 iterations. Consider using a different numerical method or increasing the maximum number of iterations.", some pointers regarding which additional parameters I could try would have been very useful.

Support for HiDPI?

We are trying to use PRISM on Windows laptops with HiDPI screens (with DPI settings greater than or equal to 192 dpi, resolutions of 3200x1800 and higher), but the text and widgets are unreadably small with JVM 8.

I believe this issue lies between the JVM and the Windows OS, so I tried Java 9, which implements JEP 263 (HiDPI support for Windows), but then PRISM does not start: it shows the splash screen and exits shortly afterwards. It does not show any diagnostic information on the command line either.

Is there any way to deal with HiDPI screens?
Is there a way to turn on more diagnostics? (I have modified xprism.bat to keep the current command-line window open, but it does not show anything.)

Exact mode should check probability sums in DTMCs

The parametric model builder, which is also used by PRISM's exact engine, normalizes the probabilities by dividing by the sum of the outgoing probabilities. Thus, in exact mode, for

dtmc

module M1
  s: [0..2] init 0;

  [] s=0 -> 1/3:(s'=1) + 1/3:(s'=2);
  [] s>0 -> true;
endmodule

and -pf 'P=?[ F s=1 ]' -exact the result will be 1/2, as the actual probabilities in the model will be (1/3) / (2/3) = 1/2. In contrast, the symbolic and explicit engines report an error, as the probabilities don't sum to one.

Suggested fix: Provide the param engine with a flag that tells it whether it's in exact or parametric mode (depending on whether it's called from checkParametric or checkExact). In exact mode, don't normalize and throw an error when the sum of outgoing probabilities (as a BigRational) does not equal 1 for a DTMC or MDP.
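
A minimal sketch of the exact-mode check (assuming param.BigRational provides ZERO, ONE and add(), as used elsewhere in the param engine; adjust to the actual API if it differs):

import param.BigRational;
import prism.PrismException;

public class ExactProbSumCheck
{
	/** Throws if the outgoing probabilities of a DTMC/MDP choice do not sum to exactly 1. */
	public static void checkSumsToOne(BigRational[] probs, int state) throws PrismException
	{
		BigRational sum = BigRational.ZERO; // assumption: ZERO/ONE constants exist
		for (BigRational p : probs) {
			sum = sum.add(p);
		}
		if (!sum.equals(BigRational.ONE)) {
			throw new PrismException("Probabilities sum to " + sum + " (not 1) in state " + state
					+ " (exact mode performs no normalisation)");
		}
	}
}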

Semantics for param/exact model checking of R=?[ F "target" ] in DTMC

It looks like the param engine (and thus as well the -exact model checking mode) has a slightly different semantics regarding paths that never reach the target states.

In standard PRISM, those paths are assigned an accumulated reward of +infinity, so whenever P<1 [ F "target" ] holds, the result for R=?[ F "target" ] is +infinity. For param, it looks like those paths are assigned an accumulated reward of 0.

In general, it would probably be useful to provide a way to select the desired semantics, both for the standard PRISM engines and for the param/exact engines.

Allow unbounded -maxiters

Currently, the -maxiters option only allows overriding the default maximum number of iterations with a specific number. In some situations we have had in practice, it would have been nice to have no bound at all. Currently we just specify a rather large number, but that's a bit ugly.

What would be a good syntax? Some ideas:

  • -maxiters inf
  • -maxiters unbounded

Internally, it might make sense to encode that as INT_MAX and provide some helpers for checking the current iteration count against the bound. That would have the advantage over an encoding such as -1 that code (in other branches or extensions) that has not been converted would not fail catastrophically. As an alternative, one could also pass along a Boolean flag for unbounded, but that would touch quite a few JNI invocations, etc.
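
A minimal sketch of the INT_MAX encoding and the helpers (the class and method names are hypothetical):

public class MaxIters
{
	/** Sentinel for "no bound"; existing comparisons against a large bound keep working. */
	public static final int UNBOUNDED = Integer.MAX_VALUE;

	/** Parse the -maxiters argument, accepting "inf"/"unbounded" as no bound. */
	public static int parse(String arg)
	{
		if (arg.equalsIgnoreCase("inf") || arg.equalsIgnoreCase("unbounded"))
			return UNBOUNDED;
		return Integer.parseInt(arg);
	}

	/** True if the iteration count has reached the (possibly unbounded) limit. */
	public static boolean limitReached(int iters, int maxIters)
	{
		return maxIters != UNBOUNDED && iters >= maxIters;
	}
}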

Handling for huge integers in the parser

When parsing something like

const double p = 2867940088951973/288230376151711744;

we get a parsing error as the numerator and denominator are treated as integer literals, which are only allowed to be in the range of Java ints.

Clearly, evaluation using the normal integer arithmetic in Java would not work. However, in the exact and parametric engines, such large constants can be handled. We might look into how to handle this better.

(from the mailing list)

Model import/export from ZIP

Currently, exporting a model to an explicit representation generates a variety of files:

  • model.sta for the state index to variable value information
  • model.tra for the actual transition relation
  • model.lab for labels
  • model.i.srew for state rewards (indexed if there are multiple reward structures)
  • model.i.trew for transition rewards

Likewise, importing a model relies on these files. By using -exportmodel model.all and -importmodel model.all, sta, tra and lab files are generated and read.

It would be nice for PRISM to support -exportmodel model.zip and -importmodel model.zip, which would generate/read a ZIP file containing all of the files above related to the model. This would simplify the handling considerably.

For an implementation, the following changes would be necessary:

  • When importing/exporting, the reading and writing methods should simply deal with a BufferedReader/BufferedWriter and not a File/filename, so that input/output can be redirected to a ZIP (see the sketch after this list). In particular, exporting from the symbolic engine would need some tweaks, as those methods use the C FILE API.
  • ZIP handling is possible using the Java APIs in package java.util.zip
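
A minimal sketch of the writer side, redirecting output into ZIP entries via java.util.zip (the entry names and file contents here are just placeholders):

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipExportSketch
{
	public static void main(String[] args) throws IOException
	{
		try (ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(Paths.get("model.zip")))) {
			writeEntry(zip, "model.sta", "(s)\n0:(0)\n");
			writeEntry(zip, "model.tra", "1 1\n0 0 1\n");
		}
	}

	// Each export method would only need a BufferedWriter; here it is backed by a ZIP entry.
	private static void writeEntry(ZipOutputStream zip, String name, String content) throws IOException
	{
		zip.putNextEntry(new ZipEntry(name));
		BufferedWriter w = new BufferedWriter(new OutputStreamWriter(zip, StandardCharsets.UTF_8));
		w.write(content);
		w.flush(); // flush, but do not close: closing the writer would close the whole ZIP stream
		zip.closeEntry();
	}
}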

One question would be: Do you want the suffix to be .zip instead of something PRISM-related? The benefit of .zip would be that it's directly apparent to the user that they can use standard tools to inspect the contents. Perhaps something like .prism.zip?

It would also probably be a good idea to store the model type in a file in the ZIP, e.g., a file model.type whose content is one of dtmc, mdp, etc.

Another question is how to name the entries in the ZIP. One possibility would be to have a constant prefix, i.e., model.foo for the individual files. Another would be (for a file named A.prism.zip) to contain a folder A with contents A/A.sta, A/A.tra, etc. This would allow unpacking of multiple models without name clashes. Likewise, it might make sense to allow -importmodel A if A is a directory, searching there for the corresponding files.

Explicit engine: Transition rewards from deadlock states

Consider the model

mdp

module M1
  s: [0..1] init 0;

  // deadlock for state 1
  [] s=0 -> 1:(s'=1);
endmodule

rewards "r"
  [] true: 2;
endrewards

In state 1, the deadlock is fixed (with PRISM's default settings) by adding a new self-loop choice. Currently, in the explicit engine this artificial choice matches the [] true filter and thus gets a transition reward, while in the symbolic engine it does not get a transition reward:

Model checking: Rmax=? [ C ]
...
Result: Infinity (value in the initial state)  // -explicit
Result: 2.0 (value in the initial state)       // -hybrid

According to @davexparker, the symbolic engine gets it right; these deadlock fix transitions should not be able to have a transition reward.

Note: Until recently, the properties supported in the explicit engine could not reveal this, but now total accumulated reward and co-safety LTL rewards can accumulate beyond deadlock states.

Wrong result for CTMC-Until with >=t bound (explicit engine)

For example, for prism-tests/functionality/verify/ctmcs/rewards/ctmc_rewards2.sm and property P=?[ F>=0 s>3 ], the explicit engine yields Infinity, while the symbolic engines yield a value ~1/3.

This is due to performing the Until computations using the DTMC model checker directly on the CTMC instead of on the embedded DTMC (at two code locations).

This only happens if there is a lower bound but no upper bound, i.e., for F>=t target or a U>=t b, with CTMCs and the explicit engine.

Windows: set PRISM_PATH in the (x)prism.bat files during installation

Currently, in the prism.bat (and xprism.bat), the location of the PRISM directory is hard-coded as ..:

rem PRISM home directory
rem Default is .. so it can be run directly from the bin directory.
rem Change ".." to the actual PRISM directory to allow it to be run from anywhere.
rem An example would be: set PRISM_DIR=c:\Program Files\prism-3.2
rem Note: Do not put quotes ("...") around the path.
set PRISM_DIR=..

This should probably be replaced by the actual installation location during the installation of the binary with the installer. There seems to be a way to replace the corresponding lines via some tweaking of the NSIS installer configuration:

http://nsis.sourceforge.net/ReplaceInFile
http://nsis.sourceforge.net/Replacing_Lines_in_a_Text_File

Show example state whenever probabilities sum up to more or less than one.

Whenever one tries to build a model in which the probabilities sum up to less or more than one, an error like the one below is shown:

Error: Probabilities in command 1 of module "name" sum to less than one (e.g. X) 
for some states.  Perhaps some of the updates give out-of-range values. 
One possible solution is to strengthen the guard (line Y, column Z).

This gives the user only a minimal pointer to figure out the mistake in their model.

A useful addition to this error message would be to give an example state in which this is actually the case.

This is already being done in the GUI whenever one tries to plot a path in the faulty model.

Error: Probabilities sum up to A in state(B=0, C=1) (line X, column Y)

Explicit steady-state computation does not converge

Computing the steady-state distribution of the attached model does not converge when using the explicit engine (at least not within 1e6 iterations). However, the (semi-)symbolic engines take only about 250 iterations.

Parameters:

  • convergence: relative
  • epsilon: 1e-2
  • engine: explicit
  • method: power (only option for explicit steady-state computation)

Unfortunately, inspecting the code has not revealed anything suspicious so far. Excluding self-loops does not make a difference. Am I missing something here, or could this be due to some numerical (floating-point) issue?

spinlock_cache_agnostic.pm.zip

PTAParser.jj is outdated and does not match the rest of the codebase

Hello @davexparker , @kleinj !
I would like to report a serious error in "PTAParser.jj" I found while working with @VojtechRehak and @Lkorenciak on implementing some new features.

I (accidentally) recompiled all JavaCC files. This resulted in a build error.
After digging around, I realised that commit 4ad8e43 removed the default constructor for PTA and replaced it with a parametric one.

-	public PTA()
+	public PTA(List<String> alphabet)
  	{
  		...
  	}

This would be okay, but the constructor calls were only changed in "PTAParser.java", not "PTAParser.jj".
From what I understand, "PTAParser.java" is autogenerated by JavaCC from "PTAParser.jj".
Thus, recompiling "PTAParser.jj" as of now generates an incompatible version of "PTAParser.java".

This can easily be fixed by applying to "PTAParser.jj" the same modification that the commit made to "PTAParser.java", i.e., the following code change in "PTAParser.jj":

-                        pta = new PTA();
+                        LinkedHashSet<String> alphabet = new LinkedHashSet<String>();
+                        // Find alphabet
+                        n = locationNames.size();
+                        for (i = 0; i < n; i++) {
+                            ArrayList<astTransition> tt = transitions.get(locationNames.get(i));
+                            if (tt == null || tt.isEmpty()) continue;
+                            for (astTransition t : tt) {
+                                if (t.action != null && !t.action.equals("")) {
+                                    alphabet.add(t.action);
+                                }
+                            }
+                        }
+                        // Create new PTA
+                        pta = new PTA(new ArrayList<String>(alphabet));

Thanks!
Majo

Buggy state reward export on 32-bit linux

After the 4.4 release, the test suite fails on linux32. This seems to be mainly due to incorrect exporting of state rewards; more specifically, the second number on the first line (the number of non-zero values) shows up as zero. Examples can be found in prism-tests/functionality/export/ctmc.

The likely cause is the call to DD_GetNumMinterms on line 69 of src/mtbdd/PM_ExportVector.cc, since there was some 32/64-bit CUDD tidying done just prior to 4.4.

Generate strategy from shell

Is it possible to generate a strategy and export it via the CLI? Something like prism game.prism props.csl -prop 1 -generatestrategy -exportstrategy strategy.txt

[Screenshot attached]

Expression simplification for 0-x does not preserve type

During expression simplification, expressions of the form 0-x (binary minus operator) are transformed to -x (unary minus operator), in two places of the form

return new ExpressionUnaryOp(ExpressionUnaryOp.MINUS, e.getOperand2());

Unfortunately, the new unary minus expression does not get assigned an expression type. This is not necessarily fatal, but in some situations it can lead to an exception during evaluation. This is, e.g., the case for (explicit engine) model building or simulation, as there the expressions are simplified before evaluation.

Small example:

module M1
   s : [0..1] init 0;

   [] true -> 1/2:(s'=0) + 1/2:(s'=max(1,0-s));
endmodule

The update is simplified to s'=max(1,-s), with the -s leading to an evaluation exception, as the max expression tries to evaluateInt() on the -s, which has a null type.

E.g., for exact model building, we get Error: Cannot evaluate to an integer ("-s").

Fix: Preserve the expression type during simplification.
Found by Max Korn.
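
A minimal sketch of the fix (assuming Expression exposes getType()/setType() as set during type checking; adjust if the actual accessors differ):

import parser.ast.Expression;
import parser.ast.ExpressionBinaryOp;
import parser.ast.ExpressionUnaryOp;

public class SimplifyMinusSketch
{
	/** Rewrite 0-x to -x, carrying the type of the original binary expression over. */
	public static Expression simplifyZeroMinus(ExpressionBinaryOp e)
	{
		ExpressionUnaryOp neg = new ExpressionUnaryOp(ExpressionUnaryOp.MINUS, e.getOperand2());
		neg.setType(e.getType()); // 0-x has the type of the original expression, so copy it rather than leaving null
		return neg;
	}
}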

Parsing error when using ternary operator in Formula/Label

Whenever I try to use a ternary operator in a formula/label, I'm presented with a parsing error:


const double default_probability = 0.5;
formula: conditional_probability = (x=8 ? 0 : default_probability);

module test

    x : [0..8] init 0;

    [] true ->
    conditional_probability*0.1 : (x' = min(x + 1, 8))
    + conditional_probability*0.9 : true
    + 1 - conditional_probability: true;

endmodule

Parsing model file ".\tertiary_operator_in_formula.prism"...
Error: Syntax error (":", line 6, column 8). 

Using a ternary operator in a probability works:


const double default_probability = 0.5;

module test

    x : [0..8] init 0;

    [] true ->
    (x=8 ? 0 : default_probability)*0.1 : (x' = min(x + 1, 8))
    + (x=8 ? 0 : default_probability)*0.9 : true
    + 1 - (x=8 ? 0 : default_probability): true;

endmodule

Integrate fastutil

fastutil (github: https://github.com/vigna/fastutil) is a Java library that provides variants of Java's generic data structures, specialized for the primitive types and thus avoiding boxing ints, doubles, etc into Integers, Doubles, ...

This often improves performance and reduces memory requirements, so it has been used by several groups in their PRISM extensions. We'd thus like to make it available in the main PRISM.

fastutil uses a templating approach to generate the Java source files for various instantiations (Long2BooleanMap, ...), which results in a large number of classes, leading to the question of how to package it for PRISM in a smart way - without unnecessarily increasing compile times, download size/time and git repository size. @incaseoftrouble and I had some discussions about the options:

  1. Dropping a JAR file of the compiled classes into lib. Currently, that would be a 17MB binary blob. It would also change every time we want to include a new fastutil release. To get convenient JavaDocs for the classes in the IDEs, we'd have to include the source files (15MB compressed, 77MB uncompressed) in the JAR.
  2. Dropping a filtered JAR into lib. There are tools that can automatically cull a JAR file to only include those classes that are actually used. That would probably reduce the JAR size quite a bit, because there are a ton of fastutil classes that will probably never be used by PRISM (e.g., all 'short', 'byte' and 'float' specializations). There is then the complication that every time some PRISM developer wants to use some previously unused part of fastutil, we'd have to rebuild the JAR and drop a new binary blob. In particular, that would complicate merging of pull requests.
  3. Including the generated Java sources for the subset of fastutil that is used by PRISM in the PRISM git repository. @incaseoftrouble wrote a script that can use the JDK's jdeps tool to find the dependencies and write the corresponding .java filenames in a text file. This would allow some automatic way to copy/keep exactly the required files. Benefits are that merging PRs, updating to a new fastutil version, importing additional fastutil classes would be regular git operations on the Java sources.

If the actual number of files that are needed is reasonable, my personal preference would be for option 3. In that case, I think the following approach makes sense: You'd have one text file that lists the fastutil dependencies that result from the PRISM source, i.e., those classes that are directly imported into PRISM source code. In a separate step, one could then generate the list of fastutil dependencies for those classes, i.e., the dependencies inside fastutil, and include all these .java source files. In addition, it should be possible to manually add fastutil classes to the first list, e.g., because one knows that those classes are generally useful or used in some branch. It would then be sensible to have some script support to automate the usual tasks (updating to a new fastutil version, updating the Java files according to the dependencies, ...).

Action synchronization broken in PTAs?

Hi, I'm getting a behaviour that seems to indicate that action synchronization is broken in PTAs (unlike DTMCs).
Here's a simple model where I'd expect the results to be the same for the PTA and the DTMC, but the PTA seems to allow an action that is blocked in one module to occur in another - see the probability 1.0 for eventually seenA, which I'd expect to be 0.0, as in the DTMC model.

Best,
Christos

// dtmc // Pmax =? [ F seenA ] = 0.0 Pmax =? [ F seenB ] = 0.0
pta // Pmax =? [ F seenA ] = 1.0 Pmax =? [ F seenB ] = 0.0

module AA
seenA: bool init false;

[fire] true -> (seenA'=true);
endmodule

module BB
seenB: bool init false;

[fire] false -> (seenB'=true);
endmodule

CI pipeline

As discussed previously, having a CI pipeline would be nice. Travis CI is free for open source projects. It should be reasonably easy to integrate.

Parametric/exact Rmin: wrong result

At https://groups.google.com/d/topic/prismmodelchecker/vQI-1srXSdk/discussion, Shalini Ghosh reported a small MDP with a strange result for an Rmin formula in the parametric engine. The model is attached.

The problematic formula is Rmin=?[ F "changed" ].

prism lanechange_mdp.prism -pf 'Rmin=?[F "changed" ]' --param p=0:1
Result (minimum expected reward): ([0.0,1.0]): { 1  | 2  }

prism lanechange_mdp.prism -pf 'Rmin=?[F "changed" ]' --exact --const p=0.626
Result (minimum expected reward): 1/2

prism lanechange_mdp_simple_parametric.prism -pf 'Rmin=?[F "changed" ]' --const p=0.626
Result: 1.002137421843802 (value in the initial state)

Manually restricting the MDP to the minimal strategy DTMC yields the expected results:

prism lanechange_dtmc.prism -pf 'R=?[F "changed" ]' --exact --param p=0:1
Result (expected reward): ([0.0,1.0]): { 2 p - 5  | 10 p - 10  }

prism lanechange_dtmc.prism -pf 'R=?[F "changed" ]' --exact --const p=0.626
Result (expected reward): 937/935

prism lanechange_dtmc.prism -pf 'R=?[F "changed" ]' --const p=0.626
Result: 1.0021390374331551 (value in the initial state)

Some preliminary debugging suggests that the precomputation for Rmin in the parametric/exact engine is the most probable culprit.

lanechange_mdp.prism.txt
lanechange_dtmc.prism.txt

Status of CTMDP support

Hi @davexparker @kleinj, I wanted to know if there are any plans to make CTMDP support mainstream. I can see a CTMDPModelChecker with bounded reachability implemented, but I couldn't find a way to invoke this through the CLI/GUI. The parser doesn't seem to support files starting with ctmdp either. Is there some hidden way in which one can input CTMDPs and work with them? As far as I understand, the CTMDP class extends MDPSimple and the distributions store rates instead (correct me if I'm wrong). It doesn't seem too hard to write a CTMDP model file like one writes an MDP model file and parse it. Is it worth the effort to go this way?

Silent cast errors.

Currently, integer overflows and underflows are silent errors during expression evaluation. This may lead to models that are broken without a chance for the author to recognise the error. See PR #91 .

More flexible global writes for synchronizing actions

Currently, PRISM only allows writes to global variables from commands with internal ([]) actions, not with named actions ([a]). This ensures that there can be no conflicting writes, as the internal actions can never synchronize with some other action.

However, it is often useful to have more flexibility. As part of our work on variable reordering, we developed a relaxed version that allows global variable writes as long as the following condition holds: For each global variable, writes using named actions occur only from a single module. A backport of that relaxed version to a recent-ish trunk can be found at https://github.com/kleinj/prism/tree/flexible-global-writes.

The tricky part of the implementation is the symbolic parallel composition, as there we have to remove (using existential quantification) the default identity assignment to the global variables for the commands that don't actually write to them.

It might make sense to have a look and see how difficult it would be to be even more relaxed, as it should be safe to only require that for each pair (global variable, action name) there is at most one module with a corresponding command writing to the global variable. However, this might get a bit tricky in the presence of the more general process algebra composition operators.

Note that this approach does not take the command guards into account, i.e., it would be safe to have multiple global writes in commands with the same action if the guards don't have any overlap. To allow such writes, I guess it would be required to mark the global variable writes using extra BDD variables in the encoding of the transition function and then check during reachability analysis whether a conflicting write occurs in the reachable state space.

explicit/SCCComputerTarjan.tarjan() crashes to stack overflow for larger models

Hi.
When using the explicit engine to compute steady-state probabilities for this model, a stack overflow exception is thrown and the computation fails, while the GUI gets stuck and pretends the computation continues.

dtmc //random walk along a path

const int qCapacity = 50000;

module main

qSize : [0..qCapacity] init 0;

[step] (qSize < qCapacity & qSize != 0) -> 0.5 : (qSize' = qSize+1) + 0.5 : (qSize' = qSize-1);
[step] (qSize = 0) -> (qSize' = qSize+1);
[step] (qSize = qCapacity) -> (qSize' = qSize-1);

endmodule

I did not explore this bug in much depth, nor do I understand this algorithm.
But from what I can tell, the reason is as follows.
SCCComputerTarjan attempts to explore all successors of the initial state recursively.
Each time the computation tries to explore a deeper successor, the function tarjan(int i) gets called recursively on top of the previous call, as is apparent from the code.

	private void tarjan(int i) throws PrismException
	{
		...
		SuccessorsIterator it = model.getSuccessors(i);
		while (it.hasNext()) {
			...
			if (n.index == -1) {
				tarjan(e); //#THE RECURSIVE CALL##################################
				...
		}
		....
	}

For queue-like models with many states, it can easily happen that the recursion is so deep that a stack overflow occurs before all states are explored. Because of this, similar queue-like models with as few as 5000 states have already been too much to handle in my experience.
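
For illustration, here is a sketch of the recursion-to-explicit-stack transformation (this is only a plain DFS skeleton, not the full Tarjan algorithm - the index/lowlink bookkeeping would have to be carried along with each stack frame):

import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

public class IterativeDfsSketch
{
	/** Visits all states reachable from 'root' without using the JVM call stack for recursion. */
	public static void explore(List<List<Integer>> successors, int root)
	{
		BitSet visited = new BitSet();
		Deque<Iterator<Integer>> stack = new ArrayDeque<>();
		visited.set(root);
		stack.push(successors.get(root).iterator());
		while (!stack.isEmpty()) {
			Iterator<Integer> it = stack.peek();
			if (it.hasNext()) {
				int next = it.next();
				if (!visited.get(next)) {
					visited.set(next);
					stack.push(successors.get(next).iterator()); // takes the place of the recursive call
				}
			} else {
				stack.pop(); // all successors processed: "return" from this frame
			}
		}
	}
}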

Thanks!

Running prism-test (prism-auto -tm) on Windows/Cygwin

Running prism-test (or, equivalently, prism-auto -tm) on Windows/Cygwin fails like this:

Error: Couldn't open log file "/tmp/tmpO3RYpV".

This is because, in test mode, prism-auto creates a temporary file and passes it to the prism executable via the -mainlog switch. On Windows, subprocesses cannot access temporary files created in Python with tempfile.NamedTemporaryFile(delete=False). There is some info and possible solutions here:

http://stackoverflow.com/questions/15169101/how-to-create-a-temporary-file-that-can-be-read-by-a-subprocess/15235559#15235559

but I have not managed to get anything to work yet.

GUI: Discoverability of possible user actions

Having had a chance to look over the shoulders of several first-time PRISM users, one small stumbling block in the beginning is the discoverability of possible user interactions.

  1. The tabs at the bottom, switching between model editor, properties editor, simulator and log, take some time to find.
  2. For the properties editor, adding the first property and adding a constant or label (via the right-click popup menu) seems to be difficult.
  3. The Simulate option in the property pop-up is worded a bit confusingly; it could perhaps be made clearer that this refers to statistical model checking and not to simulating a path with the property taken into account.

Some ideas re 1:

  • make the tabs larger, more noticeable? Bold the Log tab, if there is new output?
  • add Switch to model view, ... actions to the corresponding menus, as the first action? Go to model?
  • move the tabs to the top?

Some ideas re 2:

  • Visually, the constants and labels tables don't look editable (just gray background). Perhaps use the white background of the properties table? Add a grid?
  • Allow double-click on the constants or labels table to add?
  • Provide a tool-tip that suggests right-clicking?
  • Have, as the last line of the table, a small + button for adding?
  • Have, in the rule box with Properties, Constants, ... a small + button for adding? Not sure if that's technically possible, but would save screen space.

Location of configuration file on UNIX-like systems

Consider using the configuration file location described in the XDG Base Directory Specification as the default on UNIX-like systems. A configuration file stored as a hidden file in the user's home directory is annoying and inconsistent with many other applications.

$XDG_CONFIG_HOME defines the base directory relative to which user specific configuration files should be stored. If $XDG_CONFIG_HOME is either not set or empty, a default equal to $HOME/.config should be used.
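
A minimal sketch of resolving that location in Java (the "prism" sub-directory and "settings" file name are just placeholders):

import java.io.File;

public class XdgConfigLocation
{
	/** Resolve the configuration directory following the XDG Base Directory Specification. */
	public static File configDir()
	{
		String xdg = System.getenv("XDG_CONFIG_HOME");
		if (xdg == null || xdg.isEmpty()) {
			// Spec default when $XDG_CONFIG_HOME is unset or empty
			xdg = System.getProperty("user.home") + File.separator + ".config";
		}
		return new File(xdg, "prism");
	}

	public static void main(String[] args)
	{
		System.out.println("Settings would live at: " + new File(configDir(), "settings"));
	}
}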

DTMCUniformisedSimple.getDeadlockStates() crashes to NullPointerException

Hello!
These are the only two constructors for class DTMCUniformisedSimple. Neither has an initialise() call.

	public DTMCUniformisedSimple(CTMCSimple ctmc, double q)
	{
		this.ctmc = ctmc;
		this.numStates = ctmc.getNumStates();
		this.q = q;
		numExtraTransitions = 0;
		for (int i = 0; i < numStates; i++) {
			if (ctmc.getTransitions(i).get(i) == 0 && ctmc.getTransitions(i).sumAllBut(i) < q) {
				numExtraTransitions++;
			}
		}
	}

	public DTMCUniformisedSimple(CTMCSimple ctmc)
	{
		this(ctmc, ctmc.getDefaultUniformisationRate());
	}

Without a call to initialise(), attributes such as protected TreeSet<Integer> deadlocks remain null.
DTMCUniformisedSimple does not override getDeadlockStates(), so getDeadlockStates() returns null. For example, constructing a DTMCSparse from a DTMCUniformisedSimple using the constructor below crashes due to a NullPointerException.

	public DTMCSparse(final DTMC dtmc) {
                ...
		initialise(dtmc.getNumStates());
		for (Integer state : dtmc.getDeadlockStates()) {
			deadlocks.add(state);
		}
                ...
	}

It seems that the best solution would be to add an implementation of getDeadlockStates() to DTMCUniformisedSimple, overriding the one inherited from ModelExplicit, which is the one causing the NullPointerException.
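
A sketch of the proposed override (the return type is inferred from the quoted DTMCSparse constructor, which iterates over Integer states; adjust to the actual interface):

	@Override
	public Iterable<Integer> getDeadlockStates()
	{
		// Uniformisation adds self-loops where needed, so the uniformised DTMC
		// has no deadlock states of its own; an empty iterable avoids the NPE.
		return java.util.Collections.emptyList();
	}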

Performance: init ... endinit block for explicit engine

When using the init ... endinit method of specifying multiple initial states, the state space generation in the explicit engine can be quite slow. Currently, all possible valuations of the state variables are generated, i.e., the full Cartesian product, and each of them is checked against the init expression to determine whether it is an actual initial state (simulator.ModulesFileModelGenerator.getInitialStates()). If the potential state space is quite large, this becomes very expensive, even though potentially only a small fragment of the state space is actually reachable. Clearly, it is a hard problem for the explicit engine to find all the satisfying variable assignments in general, but it would be nice if the non-hard cases were handled more efficiently.

Some ideas for possible approaches:

  • Detect common, "simple" init expression patterns (e.g., conjunction of simple comparison expressions) and handle those separately.
  • Exploit the symbolic MTBDD engine by translating only the init expression into a BDD and then iterate over the satisfying variable assignments to obtain the initial states.
  • Generating the satisfying assignments via some kind of structural induction? There, it would probably be good to transform the expression into positive normal form, i.e., with negations pushed inwards so that no boolean negation operators remain.

Non-deterministic bug in the parser?

Hi, I have a PTA model with two transitions - see in particular the two lines at the end marked with XXX (one is an assignment of two variables, the other true)
(llstate'=slRefused) & (u_state'=u_sFinal); // line XXX 1
// true; // line XXX 2
endmodule

If I try to check a property it crashes with an exception in the parser.
prism bug-pta.prism bug-pta.props -prop 1 -const T=5
Now, if I load this in xprism, comment out line XXX 1 and uncomment line XXX 2, it works.
If I now comment out line XXX 2 and uncomment line XXX 1 (thus getting the model that was crashing before), it also works!
Is Prism trying to teach me to say "please" before asking it to do something? :-)

I'm attaching the model & prop files, along with my ~/.prism.
$ prism -version
PRISM version 4.3.1.dev
$

Any help with this bug would be more than appreciated!

Best,
Christos

non-det-bug.zip
