Options

The options below are categorised according to what they affect.

Compiler

option compiler.check_for_immediate_recursions
Default:On

If true, FDR will check to see if any processes of the form P = P [] STOP are defined. If this option is not set then such a process will cause FDR to fail at runtime, generally with a segmentation fault. If you know that, by construction, no such processes can be contained within your file, then this option can be disabled to reduce the time required to compile processes. Note that this will only have an impact on huge processes that contain millions of names; on anything smaller the difference will be negligible.
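
As a minimal illustrative sketch (the channel a and the process Q are hypothetical additions, not part of the original example), the first definition below recurses immediately, with nothing guarding the recursive call, whereas the second guards its recursion with an event and is unproblematic:

channel a

-- Immediate recursion: P refers to itself with no guarding event.
P = P [] STOP

-- Guarded recursion: the recursive call only occurs after the event a.
Q = a -> Q [] STOP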

option compiler.leaf_compression
Default:sbisim

The compression that is used for compressing arguments of high-level machines. This may either be none, indicating that no compression should be used, or sbisim or wbisim, to indicate that the strong or weak bisimulation compression function should be used, respectively.

option compiler.recursive_high_level
Default:On

If true, FDR will compile processes of the form P = Q ; P in an optimised format. It is recommended that this option is not disabled unless it produces poor results on such a process.
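
For example, a script of the following shape matches this pattern (a hypothetical sketch; the channels and the definition of Q are illustrative): Q runs to completion and P then restarts it.

channel a, b

Q = a -> b -> SKIP

-- P has the form P = Q ; P: it repeatedly runs Q to completion.
P = Q ; P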

option compiler.reuse_machines
Default:On

If true, FDR will, at the cost of increased memory usage, re-use named state machines in different assertions. If the same state machine is not used in more than one assertion then it may make sense to disable this, as doing so will decrease memory usage.

Functional Language

option cspm.evaluator_heap_size
Default:A default value chosen by FDR.

This option allows for more memory to be allocated for the evaluator up front and can be used to improve performance in some cases. If the value is suffixed by K, M, or G, it is a size in kilobytes, megabytes, or gigabytes, respectively. Otherwise, it is a size in bytes.
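
For example, a value of 2G would request a two-gigabyte heap, whilst a plain value of 2048 would request 2048 bytes.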

Warning

This option will not take effect until FDR is restarted.

option cspm.profiling.active
Default:Off

If set to On, then simple profiling statistics are collected that detail how many times each function has been called, and by which other functions. For further details on profiling, see Profiling. Note that turning on this option will reduce the performance of the evaluator, and therefore it should only be kept active for as long as necessary.

Warning

This option will not take effect until load or reload is executed.

option cspm.profiling.flatten_recursion
Default:On

This option affects how profiling statistics are reported (and is therefore only relevant assuming cspm.profiling.active is On). If this option is Off, then recursive functions, such as:

f(0) = 0
f(x) = f(x-1)

will result in a hierarchy that is as deep as the longest possible chain of recursive calls, causing profiling to be slower and making the data more accurate, but potentially more difficult to interpret. See Profiling for further details.

Warning

This option will not take effect until load or reload is executed.

option cspm.runtime_range_checks
Default:On

If Off, then FDR will not check to see if the values that are dotted with channel or datatype constructors are in the defined sets. For example, consider the following script:

channel c : {0..1}
datatype X = Y.{0..1}

If this option is On, then FDR will throw an error if c.2 or Y.2 is evaluated. Turning this option off suppresses these errors.

This option should only be disabled for performance reasons, in circumstances where you are confident that such errors cannot be produced. In particular, whilst turning this option off will never cause FDR to emit further errors, it is possible to construct processes that behave incorrectly. For example, in theory, hiding Events should result in a process that can perform no visible events, but (c.2 -> STOP) \ Events is equivalent to c.2 -> STOP, since c.2 is not a valid event and is therefore not in Events.
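
As a sketch of the behaviour described above (reusing the channel declaration from the earlier snippet; the process name P is illustrative), with this option Off the following process is accepted and can still perform c.2:

channel c : {0..1}

-- With runtime range checks disabled, c.2 is not rejected even though 2 is
-- outside the declared range, so hiding Events does not remove it.
P = (c.2 -> STOP) \ Events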

Warning

This option will not take effect until load or reload is executed.

Graphical User Interface

option gui.close_windows_on_load
Default:Off

If set to On, then whenever a load or reload command is executed, all windows relating to the current session will be closed.

option gui.console.history_length
Default:100

The number of history entries to keep in the command prompt. Any integer strictly greater than 0 is permitted for this option.

Refinement Checking

option refinement.bfs.workers
Default:The number of available cores.

The number of workers to use for a refinement check that is using breadth first search. This may be set to any value strictly greater than 0. If 1 worker is selected then the algorithm used is (at least conceptually) identical to that of FDR2. If more than 1 worker is specified then a parallelised version of breadth first search is used, as explained in Refinement Checking.

option refinement.cluster.homogeneous
Default:false

If set to true, FDR will assume that the nodes in the cluster are homogeneous (i.e. each node has the same number of cores of the same speed), rather than trying to calculate the speed of each machine.

If you are operating on a homogeneous cluster, it is strongly recommended that this flag should be set, as otherwise the cluster may end up being imbalanced.

option refinement.desired_counterexample_count
Default:1

The number of counterexamples that FDR should attempt to find during a refinement check. Note that FDR may not be able to find this many counterexamples, but will continue to search until it has either found the desired number or explored every state pair.

See Uniqueness for further details on what FDR considers to be a unique counterexample. See Debug Viewer for a description on how to view the different counterexamples.
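
As a hypothetical illustration (the channels and the assertion below are not from the original text), the following failing check has two distinct violating traces, <a> and <b>; with this option set to 2, FDR would attempt to report both (subject to its notion of uniqueness):

channel a, b

-- STOP permits no events, so both <a> and <b> violate the refinement.
assert STOP [T= a -> STOP [] b -> STOP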

option refinement.explorer.storage.type
Default:A default value chosen by FDR.

The data structure used for storing states to visit on the next ply of the breadth first search. The permitted values are:

Default
Indicates that FDR should choose a storage structure. Presently, the default value is always a BTree, but at some point in the future it may be auto-selected depending on the problem.
BTree
Use a BTree to store the states. This uses memory proportional to the number of node pairs stored, but with an overhead of at least 8 bytes per node pair stored.
LSMTree
Use a Log-Structured Merge Tree. This should perform only slightly worse than the BTree in the worst case, but may be much faster on some examples, particularly those that incorporate machines with relatively random transition systems. This has essentially no storage overhead, but can end up storing multiple copies of the same node pair. However, the number of duplicate copies is bounded by the logarithm of the number of stored node pairs.

option refinement.storage.compressor
Default:A default value chosen by FDR.

The type of compressor to use for the block-based storage used during refinement checking.

Default
Lets FDR choose a compressor. Currently, this always defaults to LZ4.
DefaultHigh
Lets FDR choose a compressor that will result in a high compression ratio. Currently, this defaults to Zip. This can be useful on checks that exceed the amount of RAM available.
LZ4
Compresses the data using the LZ4 compression algorithm. This generally reduces the storage requirements to 0.3 of the original, whilst having no noticeable impact on checking time.
LZ4HC
Compresses the data using the LZ4HC compression algorithm. This generally reduces the storage requirements to around 0.25 the original requirement, but halves the number of states that can be checked per second compared to LZ4.
Zip
Compresses the data using the Zip compression algorithm. This generally reduces the storage requirements to around 0.2 of the original requirement, but will reduce the number of states that can be checked per second to around three quarters of that achieved using LZ4.

option refinement.storage.file.path
Default:None

If set to a comma-separated list of writeable directory paths, FDR will use on-disk storage to store data during refinement checks, rather than relying on the system’s swap implementation. In particular, FDR will still make use of RAM to cache data, but when the cache is filled, FDR will evict data to files stored in the directories set above. The size of the in-memory cache can be set using refinement.storage.file.cache_size. For example, setting this to /scratch/disk1,/scratch/disk2 will cause FDR to write data to both directories.

This option is only useful if the amount of memory required to complete a check exceeds the available RAM. In such cases, setting this option to a valid directory path typically increases performance. Further, it allows FDR to store more data than would otherwise fit in the system’s RAM and swap.

For maximum performance this directory should point to a location on a solid-state drive (SSD). Use of traditional disk drives is not recommended since FDR is not optimised for such cases.

When this option is selected it may be necessary to increase the maximum number of files that are allowed to be simultaneously open. On large checks that consume several hundred gigabytes of storage, FDR will easily require several thousand files to be simultaneously open. The required number of files can be estimated by dividing the number of megabytes of storage required for the check by 64 (which is the file size FDR uses by default). To adjust the maximum number consult the documentation for your Linux distribution (searching for “set maximum number of open files” generates useful results). FDR checks at runtime if this value is sufficient for operation.
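
For example, a hypothetical check requiring roughly 200 GB (around 204,800 MB) of on-disk storage would need approximately 204,800 / 64 = 3,200 files to be open simultaneously.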

See also

refinement.storage.file.cache_size
Allows the amount of memory that FDR uses for the in-memory cache to be set.
refinement.storage.file.checksum
If set, FDR will verify data that is written to the disk.
refinement.storage.file.compressor
Allows extra compression to be applied to blocks written to disk, thus minimising the amount of on-disk storage required.

option refinement.storage.file.cache_size
Default:90%

If file-based storage is being used (see refinement.storage.file.path), this specifies the amount of memory to use before evicting data to disk. This can be specified either as a percentage of the available system RAM at the point the refinement check starts, or as an absolute number of bytes. For example, specifying 50% will cause FDR to use 50% of the remaining system RAM at the point the check starts, whilst 100GB would cause FDR to use 100GB of RAM for the cache. KB, MB, GB and TB may be used as suffixes to specify the amount of RAM to use for the cache.

For maximum performance this value should be set as high as possible, but must not be set so high that FDR would start to use swap for its in-memory cache.

option refinement.storage.file.checksum
Default:Off

If file-based storage is being used (see refinement.storage.file.path) and this option is set to On, FDR will verify data that is read from disk. This can be useful as a guard against disk corruption. If FDR detects a corrupted block it will abort the check (there is no way to simply revisit the affected states, unfortunately). This causes a small increase in runtime, generally around 5%.

option refinement.storage.file.compressor
Default:None

If file-based storage is being used (see refinement.storage.file.path) and this is set to a value other than None, it specifies an additional type of compression to apply to blocks that are evicted to disk. This can be useful to minimise the amount of disk storage used, but does result in the time to check a property increasing by anywhere between 5% and 50% (depending on the problem).

This may be set to any of the values that refinement.storage.compressor can be set to, in addition to the value None.

option refinement.track_history
Default:On

This option controls whether FDR will record information that enables it to reconstruct traces if a counterexample is found. If this option is disabled, FDR will require less memory (up to 50% less), but will not be able to report any counterexamples. This option should therefore only be disabled if the check is expected to pass, since FDR will not provide any information about why the check fails if the assertion does not pass.