User Guide: DSim Constraint Solver Options



The following options apply to the solving of SystemVerilog constraints.


Name                  Description
-cs-use-bdd           Use the BDD solver
-cs-use-sat           Use the SAT solver
-no-cs-range          Disable range analysis during constraint solving
-cs-array-max n       Set n as the maximum value for a randomized array's size() (default 1000000)
-cs-randc-max n       Set n as the maximum number of bits in randc variables (default 16)
-try-all-soft-first   Try a solve with all soft constraints enabled up front

General Approaches

The constraint solver iteratively executes two steps to solve a constraint set:


  • Algebraic simplification of expressions involving only state variables; expansion of conditional and foreach constraints (see the sketch after this list).
  • Bidirectional constraint solving: synthesizing the constraint set into a boolean logic representation and using a boolean logic solver to find the solution.
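
As a rough illustration of the "expansion" performed in the first step, the following class-body fragment (with hypothetical names) contains a foreach constraint that the solver expands into one constraint per element before anything is synthesized:


rand bit [7:0] data[4];
constraint each_c {
    // expanded by the solver into four element-wise constraints:
    //   data[0] < 100, data[1] < 100, data[2] < 100, data[3] < 100
    foreach (data[i]) data[i] < 100;
}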

There are two boolean logic solvers available:


The BDD solver uses reduced ordered binary decision diagrams (BDDs) to build a graph representing all possible solutions to the constraint problem. A random walk is then taken to select a solution. This approach has the advantage that each possible solution is equally likely to be selected. However, the BDDs are time-consuming to construct, and run time may be unacceptable for all but the most trivial problems.


The SAT solver re-encodes the constraint set into conjunctive normal form (CNF) and then uses a modified boolean satisfiability (SAT) solver to find a solution. This approach scales to much larger problems. However, individual solutions do not necessarily have an equal probability of being selected.


The SAT solver is currently the default. The choice of solver can be forced: -cs-use-bdd forces use of the BDD solver, and -cs-use-sat forces use of the SAT solver.



Range Analysis

Range analysis is an optimization used to reduce the amount of work the logic synthesis engine must do. It attempts to infer valid ranges for each random variable and, given those ranges, determines whether individual bits of the random variable must be constant or equal to some other bit. For example:


 

rand int a, b;
rand bit x;
constraint Q {
    a inside { [0:100] };
    b inside { [0:100] };
    x -> (a * b < 1000);
}

 

Without range analysis, a and b would be considered 32-bit integers by Verilog rules, so the multiplication would be a full 32x32 multiply. However, range analysis is able to infer that:


  • a can be recoded as {25'b0,a[6:0]}
  • b can be recoded as {25'b0,b[6:0]}
  • Since the condition x is more specific than the (unconditional) constraints that led to the inferences on a and b, the above recodings remain valid inside the multiplication.

As a result, the synthesis engine builds a 7x7 multiply, which results in a much smaller BDD or CNF constraint set.
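
The effect is roughly what you would get by narrowing the variables by hand. The following sketch (with hypothetical names) is the manual equivalent of the recoding that range analysis performs automatically:


rand bit [6:0] a7, b7;   // 7 bits are enough to hold 0..100
rand bit x;
constraint Q {
    a7 inside { [0:100] };
    b7 inside { [0:100] };
    x -> (a7 * b7 < 1000);   // upper bits are known zero: effectively a 7x7 multiply
}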


However, we have seen constraint sets where range analysis yields little gain, yet so many different conditionals are used that it actually costs more time than it saves! This typically happens with constraints of the form:


 

rand bit [7:0] insn_name;
rand bit [7:0] opcode;
constraint Q {
  insn_name == INSN_LD_IMM -> opcode == 8'hA9;
  insn_name == INSN_RET    -> opcode == 8'h60;
  insn_name == INSN_CALL   -> opcode == 8'h20;
  // etc. for 100+ opcodes
}

 

Each implication antecedent creates a new condition regime for range analysis to deal with, which slows down efforts to determine which set of inferences can be made at any given point.


For such pathological cases, the -no-cs-range switch can be used to disable range analysis.



Bounds on Otherwise Unbounded Behavior

The -cs-array-max option is used to provide a default constraint for code such as the following:


 

int q[$];
int x;

// scope randomization: tie the queue size to x at the call site
std::randomize(q, x) with { q.size() == x; };

 

Absent any other constraint, the size of the queue could be randomized to any arbitrary 32-bit value. Since each queue element occupies 8 bytes (4 for the value, 1 for the per-element rand_mode bit, and 3 of padding as required by x86_64 ABI rules), you could easily end up with an array of up to 2 billion elements taking up 16GB of memory, assuming you have that much memory. Instead, the constraint solver imposes an arbitrary maximum size of one million elements; the -cs-array-max option allows that limit to be overridden.
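
The most robust fix, of course, is to bound the size explicitly in the constraints so that the solver never needs to fall back on the -cs-array-max default. A sketch, reusing the hypothetical example above:


int q[$];
int x;

// bound the size explicitly rather than relying on -cs-array-max
std::randomize(q, x) with { q.size() == x; x inside { [0:100] }; };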


The -cs-randc-max option is used to control behavior of code such as:


 

class C;
    randc int x;   // randc is legal only on a class property
endclass

 

Internally, randc works like dealing cards: a deck of cards is procured, shuffled, and a single card is dealt on each call to randomize. Once the deck is exhausted, it is shuffled anew.


There is one card for each possible value of the variable. For a 32-bit integer, we need 4 billion cards, and the shoe that holds this deck is 16GB in size. Again, doable (but slow!) if you have the memory. The -cs-randc-max option limits the number of randc bits, and thus the size of the deck, in cases where code is being silly. The default value of 16 bits allows UVM to run properly.
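
When a full 32-bit randc cycle is not actually needed, declaring the variable with only the bits it requires keeps the deck small without touching -cs-randc-max. A sketch, with a hypothetical class name:


class small_deck;
    // an 8-bit randc deck holds only 256 "cards", so it is cheap to
    // build and to reshuffle once every value has been dealt
    randc bit [7:0] x;
endclass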



Controlling Soft Constraints

Given a series of N soft constraints, the constraint solver enables each soft constraint one by one. If a solve is possible with a soft constraint left in, it remains enabled; otherwise it is disabled. N soft constraints therefore require N+1 iterations of the constraint-solving algorithm to try all possibilities. This is inefficient if the soft constraints are only meant to guide the solver and are not expected to cause a failure. For that scenario, the -try-all-soft-first option will first try a solve with all soft constraints enabled. If that solve succeeds, no more work need be done. Otherwise, the soft constraints are enabled one by one in priority order, as before.
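
As a point of reference, the sketch below (with hypothetical names) shows the kind of purely guiding soft constraints this option is aimed at; with -try-all-soft-first, the solver first attempts a solve with both soft constraints active at once:


rand bit [15:0] addr;
constraint align_c  { addr[1:0] == 2'b00; }       // hard: word-aligned
constraint guide1_c { soft addr >= 16'h1000; }    // soft: steer away from the low page
constraint guide2_c { soft addr <  16'h8000; }    // soft: prefer the lower half of the map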

