[Nektar-users] memory usage and computation time in Taylor-Green problem

Sherwin, Spencer J s.sherwin at imperial.ac.uk
Sun Mar 25 19:09:32 BST 2018


Dear Vishal,

Sorry for not getting to your comments earlier. Can I first confirm that you have indeed compiled the code in parallel? What you describe sounds as if the code may have been compiled in serial and is running 112 independent copies of the same case.

To turn on parallelisation you need to enable the NEKTAR_USE_MPI option in the CMake configuration step.
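
For reference, the configure step looks roughly like this (a sketch; the directory layout and job count are illustrative):

    # From a build directory inside the Nektar++ source tree (illustrative layout):
    mkdir -p build && cd build

    # Enable MPI support at the CMake configuration step:
    cmake -DNEKTAR_USE_MPI=ON ..
    make -j8 install

A quick sanity check: with a serial build, "mpirun -np 4 IncNavierStokesSolver session.xml" will likely print the solver summary once per rank (four copies), whereas a parallel build prints it once.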

Best regards,
Spencer.


On 19 Mar 2018, at 17:21, Vishal Saini <vishal.saini.nitj at gmail.com> wrote:

Hi all,

After obtaining exciting error vs. computational cost results from relatively heavy 2D computations, I have been setting up the 3D Taylor-Green vortex (TGV) problem using the Nektar++ incompressible solver. The Reynolds number is 1600 (the same as in the TGV tutorial from the Nektar++ team). A 3D grid of 64^3 elements with NumModes=5 was used to obtain an effective 256^3 resolution (close to a DNS). I ran this problem on the Midlands+ Tier 2 machine using 4 nodes (112 cores, 512 GB RAM in total).
http://www.hpc-midlands-plus.ac.uk/about/system-description/
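
The relevant parts of the set-up are roughly along these lines (a sketch assuming the standard Nektar++ XML session format; the composite name is a placeholder and boundary definitions are omitted):

    <NEKTAR>
        <EXPANSIONS>
            <!-- NumModes=5 per direction on the 64^3 elements gives the
                 effective 256^3 resolution; C[0] is a placeholder composite. -->
            <E COMPOSITE="C[0]" NUMMODES="5" TYPE="MODIFIED" FIELDS="u,v,w,p" />
        </EXPANSIONS>
        <CONDITIONS>
            <PARAMETERS>
                <P> Re       = 1600 </P>
                <P> Kinvis   = 1/Re </P>
                <P> TimeStep = 1e-4 </P>
                <P> NumSteps = 1000 </P>
            </PARAMETERS>
        </CONDITIONS>
    </NEKTAR>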

The memory consumption during the run was ~450 GB and it took ~330 wall-clock minutes for 1000 time-steps (with dt=1e-4). For reference, I ran a 256^3 TGV simulation of the same case in OpenFOAM using the same resources: the memory consumption was ~40 GB and it took ~20 minutes for 1000 time-steps.

So for this case I observe high resource consumption in both memory and computation time. Based on my experience with Nektar++ on 2D simulations, I was not expecting this, at least as far as computation time is concerned.

Is this normal? Besides my set-up file, I also suspect my installation of Nektar++ on the aforementioned cluster. Could anyone please try running a few hundred time-steps on their machine using the attached files? FYI, I am partitioning on-the-fly, along the lines of the sketch below. The calculation seems to scale well to 8 nodes in terms of wall-clock time, while memory consumption increases by ~10%.
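
For reference, the launch command is along these lines (file names are placeholders for the attached set-up):

    # 4 nodes x 28 cores = 112 MPI ranks; the mesh is partitioned on the fly
    # at start-up, one partition per rank:
    mpirun -np 112 IncNavierStokesSolver tgv_mesh.xml tgv_session.xml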

Any input is highly appreciated.

Best regards,
Vishal


---
Vishal SAINI
Master of Research,
University of Cambridge.
<nektar_user_tgv.tar.gz>

Spencer Sherwin FREng, FRAeS
Head, Aerodynamics,
Professor of Computational Fluid Mechanics,
Department of Aeronautics,
Imperial College London
South Kensington Campus,
London, SW7 2AZ,  UK
s.sherwin at imperial.ac.uk
+44 (0)20 7594 5052
http://www.imperial.ac.uk/people/s.sherwin/
