I am solving a system of time-dependent, General Form PDEs on an interval [0,T], where T>0 is given, using MUMPS as the direct solver. The adaptive timestep decreases during the solve (I tend to initialize with a smaller-than-necessary t_init) toward a value dictated by the mathematics of the problem. However, when I switch the solver to PARDISO, the timestep first stays constant and then grows without bound. Aside from the initial timestep, all solver parameters are the defaults for both MUMPS and PARDISO. Does anyone know what the issue might be? I need to use PARDISO to take advantage of multiple cores on a single processor.
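For context, my understanding of how the adaptive timestep is driven: the integrator proposes the next step from a local error estimate, so the choice of direct linear solver should not, in principle, affect it. Below is a generic sketch of such an error-based controller; this is not COMSOL's actual implementation, and the function name, safety factor, and clamp values are illustrative assumptions:

```python
def next_timestep(dt, err, tol, order, safety=0.9, grow_max=2.0, shrink_min=0.1):
    """Propose the next step size from a local truncation error estimate.

    Generic controller sketch: shrink when err > tol, grow when err < tol,
    clamped so the step never changes by more than the given factors.
    """
    if err == 0.0:
        return dt * grow_max  # error negligible: grow as much as allowed
    factor = safety * (tol / err) ** (1.0 / (order + 1))
    factor = max(shrink_min, min(grow_max, factor))  # avoid wild jumps
    return dt * factor
```

If a controller like this grows the step without bound, the error estimate it sees must be consistently far below tolerance, which is why I suspect the two factorizations are feeding the time integrator noticeably different solutions rather than a pure timestepping bug.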