Just curious how people generally handle run control or queuing when conducting multiple analysis runs, e.g., for sensitivity/uncertainty evaluation or varying dynamic inputs. Our most common case is earthquake analysis, where we run the same model for many input ground motions. We typically produce a static save file, then load it and analyze it with a simple command-line queuing process: we call a dynamic run file (or set of files) that completes each analysis in series, and we set up several of these series to run in parallel to speed things up as much as we can. We nearly always run this process from the command line rather than the GUI.
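For reference, the kind of queuing we do can be sketched in a few lines of Python: a fixed list of ground-motion IDs, a worker that launches one command-line analysis per motion, and a thread pool that keeps a couple of runs going in parallel. The solver command shown in the comment is a placeholder, not the actual syntax; the stand-in command just echoes the motion ID so the sketch is runnable as-is.

```python
# Minimal sketch of a parallel command-line queue for batch analyses.
# The motion IDs and the solver command line are placeholders.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

MOTIONS = ["eq01", "eq02", "eq03", "eq04"]  # hypothetical ground-motion IDs
N_PARALLEL = 2                              # simultaneous runs (licenses/cores)

def run_one(motion):
    # Replace this stand-in with your real solver invocation, e.g.
    # ["solver", "call", f"dynamic_{motion}.dat"]  (placeholder names).
    cmd = [sys.executable, "-c", f"print('{motion} done')"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return motion, result.returncode

results = {}
with ThreadPoolExecutor(max_workers=N_PARALLEL) as pool:
    # map() preserves queue order; each worker runs one analysis to completion.
    for motion, code in pool.map(run_one, MOTIONS):
        results[motion] = code
```

The same idea works with plain shell scripts, but a small wrapper like this makes it easier to reorder the queue or add per-run bookkeeping later.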
A few problems I’ve noticed with this are:
- The program sometimes won’t respond to an escape command to halt during the solve loop. In that case you have to issue a break (Ctrl+C), which makes the license temporarily unavailable. It seems to resolve itself within a few minutes, but it is a bit of an issue if you realize mid-run that you need to reorder the queue, view plots, etc.
- There seems to be a limit on save-file name length (including the directory path), so if you set up a queue of analyses and realize partway through that the save names are being truncated, the runs could potentially overwrite each other.
- It can be challenging to keep track of which runs have completed if you don’t keep a log file of some sort.
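On the last point, our stopgap is essentially a completion manifest: each finished run appends its ID to a plain text file, and a restarted queue skips anything already listed. A hedged sketch of that idea (file name and run IDs are arbitrary placeholders):

```python
# Sketch of a completion log: append each finished run to a manifest so a
# restarted queue can skip work that is already done. Names are placeholders.
import os

MANIFEST = "completed_runs.log"

def already_done():
    # Read the set of run IDs recorded as complete (empty set on first start).
    if not os.path.exists(MANIFEST):
        return set()
    with open(MANIFEST) as f:
        return {line.strip() for line in f if line.strip()}

def mark_done(run_id):
    # Append-only, so a crash mid-run never loses earlier entries.
    with open(MANIFEST, "a") as f:
        f.write(run_id + "\n")

queue = ["eq01", "eq02", "eq03"]
done = already_done()
for run_id in queue:
    if run_id in done:
        continue              # finished before a restart; skip it
    # ... launch the analysis for run_id here ...
    mark_done(run_id)
```

This also doubles as a crude progress report: tailing the manifest shows what has finished without opening the GUI.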
Are there any plans to develop a separate queuing or run-control module? Perhaps something that can put commands into a stack that can be manipulated either in the GUI or on the command line? Any best practices from others for analysis control using FISH?