[firedrake] hybridisation and tensor-product multigrid
Eike Mueller
E.Mueller at bath.ac.uk
Mon Mar 16 17:51:27 GMT 2015
Hi Colin,
thanks, it sounds like it does make sense for me to write a generic
class for locally (cell-wise) assembled matrices, built around the
same principles as the existing banded matrix class which I used for
columnwise assembled forms.
That class will take as input two function spaces (the to-space and the
from-space), and on each element of the 3d grid it will store a matrix
which is the locally assembled operator (I would represent this as a
matrix-valued DG0 field on the extruded mesh). The class will also have
a method which takes a UFL form and locally assembles it into the
matrix defined on each cell.
If I add functionality for multiplying, adding and inverting these
matrices, and for applying them to fields, then I should have all I need
for building the hybridised elliptic operator.
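To make this concrete, here is a rough sketch of the class I have in
mind (nothing here is existing firedrake API: the names are made up, a
trivial stand-in replaces the function spaces, and the actual assembly
from a UFL form is only stubbed out):

    import numpy as np
    from collections import namedtuple

    # Stand-in for a function space: all the local matrix class needs
    # to know is the number of dofs per cell and whether the space is
    # discontinuous across cell boundaries.
    LocalSpace = namedtuple('LocalSpace', ['ndof_cell', 'is_discontinuous'])

    class LocalMatrix(object):
        """Cell-wise assembled operator mapping fs_from to fs_to,
        stored as one dense block per cell (the matrix-valued DG0
        representation described above)."""

        def __init__(self, fs_to, fs_from, ncells):
            self.fs_to = fs_to
            self.fs_from = fs_from
            self.blocks = np.zeros((ncells,
                                    fs_to.ndof_cell,
                                    fs_from.ndof_cell))

        def assemble(self, form):
            """Locally assemble a UFL form into the cell blocks by
            invoking the element kernel on each cell (not shown)."""
            raise NotImplementedError

        def add(self, other):
            """Cell-wise sum of two operators on the same spaces."""
            assert (self.fs_to, self.fs_from) == (other.fs_to, other.fs_from)
            result = LocalMatrix(self.fs_to, self.fs_from, len(self.blocks))
            result.blocks = self.blocks + other.blocks
            return result

        def transpose(self):
            """Swap the to- and from-space and transpose each block."""
            result = LocalMatrix(self.fs_from, self.fs_to, len(self.blocks))
            result.blocks = self.blocks.transpose((0, 2, 1))
            return result

        def inverse(self):
            """Cell-wise inverse (blocks must be square)."""
            assert self.fs_to.ndof_cell == self.fs_from.ndof_cell
            result = LocalMatrix(self.fs_from, self.fs_to, len(self.blocks))
            result.blocks = np.linalg.inv(self.blocks)
            return result

        def apply(self, u):
            """Apply to cell-local dof vectors u, shape (ncells, ndof_from)."""
            return np.einsum('cij,cj->ci', self.blocks, u)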
When multiplying those matrices, I have to be careful to only do this
where the continuity allows it: the cellwise product of the local
matrices is only exact if the intermediate space is discontinuous. E.g.
for an operator A mapping from a CG space to a DG space, and an
operator B mapping from a DG space to a CG space, the product B*A is
allowed (in the sense that I can get the local matrix representation of
B*A by multiplying the local matrices of B and A cellwise), but A*B is
forbidden, because the intermediate CG space couples neighbouring cells.
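The multiplication in the sketch above could then guard against this
explicitly:

    def multiply(B, A):
        """Cell-wise product B*A (A is applied first). Only exact if
        the intermediate space, A's to-space, is discontinuous: for a
        continuous intermediate space the composed operator picks up
        contributions from neighbouring cells, which the cell-wise
        block product cannot see."""
        assert A.fs_to == B.fs_from
        assert A.fs_to.is_discontinuous, \
            "cellwise product is only exact through a discontinuous space"
        result = LocalMatrix(B.fs_to, A.fs_from, len(A.blocks))
        result.blocks = np.einsum('cij,cjk->cik', B.blocks, A.blocks)
        return result

With the broken (and therefore discontinuous) velocity space of the
hybridised formulation this admits, for example, the operator from my
original mail below (assuming D has been locally assembled as a map
from the pressure space into the broken velocity space, and M_u on the
broken velocity space):

    S = multiply(D.transpose(), multiply(M_u.inverse(), D))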
Does that sound sensible?
Building the columnwise smoother requires a further columnwise data
structure, which is used for assembling the part of the hybridised
operator that couples the dofs on the horizontal facets, but maybe we
should get it to work in the isotropic case first.
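Just to illustrate the columnwise gathering I mean (this deliberately
leaves out the horizontal-facet couplings, which are exactly what the
extra data structure is for, and assumes the cells of each column are
numbered consecutively with the layer index running fastest):

    def column_matrices(local_mat, nlayers):
        """Gather the cell blocks of each vertical column into one
        dense matrix per column. Only the cell-interior couplings are
        placed; the couplings through the horizontal facets between
        layers are left out of this sketch."""
        ncells, m, n = local_mat.blocks.shape
        assert m == n and ncells % nlayers == 0
        ncolumns = ncells // nlayers
        mats = np.zeros((ncolumns, nlayers * n, nlayers * n))
        for col in range(ncolumns):
            for k in range(nlayers):
                cell = col * nlayers + k
                mats[col, k*n:(k+1)*n, k*n:(k+1)*n] = local_mat.blocks[cell]
        return mats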
Eike
On 16/03/15 13:25, Colin Cotter wrote:
> Hi Eike,
> If you take a look at the test_hybridisation_inverse branch, in
> tests/regression/test_hybridisation_schur, you'll see a hacked-up
> attempt at doing this for simplices. It's a bit fiddly because you need
> to assemble the form multiple times, once as a mixed system and once as
> a single block, so I'm thinking of making a tool to automate some of
> this via substitutions in UFL. Lawrence and I said we might try to
> sketch out how to do this.
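> The basic mechanism already exists as ufl.replace, which substitutes
> expressions inside a form; the mixed-system-to-single-block rewriting
> would need more machinery on top of it. As a toy illustration only:
>
>     from firedrake import *
>     from ufl import replace
>
>     mesh = UnitSquareMesh(4, 4)
>     V = FunctionSpace(mesh, "CG", 1)
>     u, v = TrialFunction(V), TestFunction(V)
>     a = inner(grad(u), grad(v))*dx
>     # substituting a Function for the trial function turns the
>     # bilinear form into the corresponding linear form (its action)
>     f = Function(V)
>     L = replace(a, {u: f})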
>
> Another slight problem is that we don't have trace elements for
> quadrilaterals or tensor product elements at the moment. Our approach to
> trace spaces is also rather hacked up: we extract the facet basis
> functions from an H(div) basis, and the tabulator returns DOFs by
> dotting the local basis functions with the local normal.
>
> Andrew: presumably you didn't implement them because you anticipated
> some fiddliness for tensor-products?
>
> cheers
> --cjc
>
> On 16 March 2015 at 08:49, Eike Mueller <E.Mueller at bath.ac.uk> wrote:
>
> Dear firedrakers,
>
> I have two questions regarding the extension of a hybridised solver
> to a tensor-product approach:
>
> (1) In firedrake, is there already a generic way of multiplying
> locally assembled matrices? I need this for the hybridised solver:
> for example, I want to (locally) assemble the velocity mass matrix
> M_u and the divergence operator D and then multiply them to get
>
> D^T M_u^{-1} D
>
> I can create a hack by assembling them into vector-valued DG0 fields
> and then writing the necessary operations to multiply them and
> abstract that into a class (as I did for the column-assembled
> matrices), but I wanted to check if this is supported generically in
> firedrake (i.e. if there is support for working with a locally
> assembled matrix representation). If I can do that, then I can see
> how I can build all operators that are needed in the hybridised
> equation and for mapping between the Lagrange multipliers and
> pressure/velocity. For the columnwise smoother, I then need to
> extract bits of those locally assembled matrices and assemble them
> columnwise as for the DG0 case.
>
> (2) The other ingredient we need for the Gopalakrishnan and Tan
> approach is a tensor-product solver in the P1 space. So can I
> already prolongate/restrict in the horizontal direction only in this
> space? I recall that Lawrence wrote a P1 multigrid, but I presume
> this is for an isotropic grid which is refined in all coordinate
> directions. Again, I can probably do it 'by hand' by just L2
> projecting between the spaces, but this will not be the most
> efficient way. Getting the columnwise smoother should work as for
> the DG0 case: I need to assemble the matrix locally and then pick
> out the vertical couplings and build them into a columnwise matrix,
> which I store as a vector-valued P1 field on the horizontal host grid.
>
> Thanks a lot,
>
> Eike
>
> --
> Dr Eike Hermann Mueller
> Lecturer in Scientific Computing
>
> Department of Mathematical Sciences
> University of Bath
> Bath BA2 7AY, United Kingdom
>
> +44 1225 38 6241
> e.mueller at bath.ac.uk
> http://people.bath.ac.uk/em459/
>
--
Dr Eike Hermann Mueller
Lecturer in Scientific Computing
Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom
+44 1225 38 6241
e.mueller at bath.ac.uk
http://people.bath.ac.uk/em459/