Optimized conflict resolution

DefaultProceduralModule6 supports pluggable components that influence conflict set generation, production instantiation, and instantiation selection. This makes it possible to optimize some of the most expensive parts of cycle execution without regard to the underlying storage mechanism. By default these components are fairly naive: the selector simply takes the first instantiation (assuming they are sorted by utility), the instantiator blindly attempts to instantiate every possibility given the current buffer contents, and the conflict set generator grabs all potentially relevant productions (based on the chunk types in the buffers).

Performance profiling has shown that instantiation is actually the most expensive phase (since so many productions fail to instantiate), making it an obvious target for optimization. The org.jactr.extensions.cached.procedural.CachedProductionSystem tracks and caches instantiation failures. When the condition that caused a failure changes, the cached entry is invalidated, and the next time the production appears in the conflict set, instantiation is attempted again. If there is a cached failure for a production at instantiation time, that failure is returned instead. All of this is accomplished simply by providing a new IProductionInstantiator.
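The failure-caching idea can be sketched as follows. This is a hypothetical illustration, not the actual jACT-R implementation: the class name, the String-keyed slots, and the methods are all invented for clarity; the real extension plugs into the model as an IProductionInstantiator.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only: cache instantiation failures and invalidate
// them when a buffer slot they depended upon changes.
class InstantiationFailureCache {
  // productions whose last instantiation attempt failed
  private final Set<String> failures = new HashSet<>();
  // the buffer slots each cached failure depends upon
  private final Map<String, Set<String>> dependencies = new HashMap<>();

  /** Record that a production failed to instantiate because of these slots. */
  void recordFailure(String production, Set<String> slots) {
    failures.add(production);
    dependencies.put(production, new HashSet<>(slots));
  }

  /** A slot's value changed: invalidate every cached failure that depended on it. */
  void slotChanged(String slot) {
    failures.removeIf(p -> dependencies.get(p).contains(slot));
  }

  /** If true, instantiation can be skipped and the cached failure returned. */
  boolean hasCachedFailure(String production) {
    return failures.contains(production);
  }
}
```

The key property is that a change to an unrelated slot leaves the cached failure intact, so repeated cycles with stable buffer contents never re-attempt instantiations that are known to fail.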

How much of a performance improvement can you expect? That depends on the nature of your model, but a ballpark figure is anywhere from a 35-90% improvement in real cycle time. If your model has very few productions or is retrieval bound (i.e., it goes from retrieval to retrieval more than through other productions), you'll be on the lower end. We have yet to see a circumstance where the overhead results in slower runs.

How can you tell? Turn on profiling.

Poor-man's Profiling

jACT-R has some basic performance profiling built in. To enable it, add -Djactr.profiling=true to the VM arguments in your run configuration. Be sure to remove all instruments and IDE tracers when profiling. When the model finishes running, you'll see a printout like this:

     Total actual processing cycles 13057
     Simulated processing cycles 13057
     Total actual time 13.460383s
     Simulate time 499.0947763613296s
     Average sleep time (wait for clock) 0.26255740215976103ms
     Average time processing events 0.07167572949375814ms
     Average production cycle time 0.7668560925174236ms
     Average production time + waits 1.0308940032166656ms
     Realtime factor 37.07879459011899 X
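The realtime factor in the report is simply the ratio of simulated time to actual wall-clock time, which you can verify from the figures above (this tiny helper is just an illustration of the arithmetic, not part of jACT-R):

```java
// Illustrative arithmetic: realtime factor = simulated time / actual time.
class RealtimeFactor {
  static double factor(double simulatedSeconds, double actualSeconds) {
    return simulatedSeconds / actualSeconds;
  }
}
```

Plugging in the report's values, 499.0947763613296 / 13.460383 reproduces the reported factor of roughly 37.08x.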

If you enable the CachedProductionSystem (more on that in a bit), you can expect something like this:

     Total actual processing cycles 13163
     Simulated processing cycles 13163
     Total actual time 10.886717s
     Simulate time 503.54204421495893s
     Average sleep time (wait for clock) 0.312651371267948ms
     Average time processing events 0.06956552457646432ms
     Average production cycle time 0.5131492061080302ms
     Average production time + waits 0.8270695889994683ms
     Realtime factor 46.25288268400463 X

Of particular interest is the change in average production cycle time, which includes conflict set assembly, instantiation, selection, and the posting of new events. The improvement here was only about 33%, but that is because this particular model has many competing productions (on average, three per cycle); the fewer competing productions you have, the bigger your performance improvement.
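The improvement can be computed directly from the two reports: the average production cycle time drops from 0.7668560925174236 ms to 0.5131492061080302 ms, a reduction of roughly a third (this snippet is just the arithmetic, not a jACT-R API):

```java
// Illustrative arithmetic: fractional reduction in average cycle time.
class CycleImprovement {
  static double improvement(double before, double after) {
    return (before - after) / before;
  }
}
```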

Using the Extension

The CachedProductionSystem is included in the default bundle. To enable it, include the following in your model's extensions block (which comes after the modules block):

     <extension class="org.jactr.extensions.cached.procedural.CachedProductionSystem">
          <parameter name="EnableCaching" value="true" />
          <parameter name="ValidateInstantiations" value="false" />
     </extension>


When you first use it, set ValidateInstantiations to true. This performs the normal caching operations while still attempting to instantiate each production; if there is a discrepancy, an error is logged. This is just to verify that the cache works for all cases until a more formal test can be devised (at which point this code will be rolled into the main distribution).
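Conceptually, validation mode checks the cache's prediction against the real instantiator. The sketch below is hypothetical (the names and signature are invented, not the actual extension's code); a discrepancy means the cache predicted failure but the production actually instantiated:

```java
import java.util.Set;

// Illustrative check: in validation mode, a cached "failure" is compared
// against the result of a real instantiation attempt.
class ValidationCheck {
  static boolean discrepancy(Set<String> cachedFailures, String production,
                             boolean reallyInstantiates) {
    // the cache is only wrong when it says "fail" but instantiation succeeds
    return cachedFailures.contains(production) && reallyInstantiates;
  }
}
```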