Python also has the ability to split work across multiple cores. Below is some sample code showing how we run multiple dynamic simulations in parallel on the same machine. Be careful about how many workers you add: with all the data from the multiple runs, the bottleneck can become I/O, so running 4 in parallel will not be 400% faster than running 1. You get diminishing returns the more cores you add. With this code you can easily play around with how many cores to use and find the optimal number for your machine.

Sample code:

    import multiprocessing
    from multiprocessing import Pool

    def run_test(test):
        # Run your stability run here...
        pass

    if __name__ == '__main__':
        # run_test is the function you call in parallel; tests is a list of data
        # that gets passed to the function (could be case name, snapshot, fault name, etc...)
        tests = []  # populate with your test cases
        workers = multiprocessing.cpu_count() - 1  # Run on N-1 cores on the local machine
        p = Pool(workers)
        p.map(run_test, tests)
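
Since the optimal worker count depends on your machine and how I/O-bound the runs are, one way to find it is to time the same batch of runs at several pool sizes. A minimal sketch of that idea (the sleep is just a stand-in workload, and tests here is a hypothetical batch):

    import time
    import multiprocessing
    from multiprocessing import Pool

    def run_test(test):
        # Stand-in workload; replace with your actual stability run
        time.sleep(0.1)

    if __name__ == '__main__':
        tests = list(range(40))  # hypothetical batch of test cases
        for workers in range(1, multiprocessing.cpu_count() + 1):
            start = time.time()
            with Pool(workers) as p:
                p.map(run_test, tests)
            print('%d workers: %.2f s' % (workers, time.time() - start))

The pool size where the total time stops dropping is roughly where I/O (or your core count) becomes the bottleneck.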