Data science and scientific computing alike rely heavily on the approximation of large-scale data in suitable low-dimensional subspaces that reveal the most significant features or parameters and thereby dramatically reduce the complexity of the data. Mathematical models represented by differential equations are solved numerically in the form of discretizations, which are essentially sequences of subspaces of increasing dimension and improving approximation power, enumerated by a discretization parameter. The choice of the discretization parameter is a tradeoff between the complexity and the accuracy of the numerical method. For particular classes of differential equations, specialized methods provide more efficient sequences of approximation subspaces, designed analytically (by hand) to improve the approximation power without increasing the complexity. In this talk, we will discuss how tensor-network approximation, based on the separation of variables and developed originally for the simulation of many-body systems, can be leveraged to construct efficient discretization subspaces adaptively in the course of computation and to extract and exploit the multilevel structure hidden in partial differential equations.
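The abstract describes the technique only in words; as a loose, hypothetical illustration (not taken from the talk), the NumPy sketch below compresses a function sampled on a dyadic grid of 2**16 points into a tensor train by successive truncated SVDs, applying the separation of variables to the binary digits of the grid index. The function name tt_svd, the tolerance, and the test function are all assumptions made for the sake of the example.

```python
import numpy as np

def tt_svd(vector, d, tol=1e-10):
    """Compress a vector of length 2**d into tensor-train (TT) cores."""
    cores = []
    rank = 1
    remainder = vector.reshape(rank, -1)
    for _ in range(d - 1):
        # Peel off one binary "level" of the grid index.
        mat = remainder.reshape(rank * 2, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        # Truncate small singular values: the separation of variables
        # replaces the full grid with low-dimensional subspaces.
        new_rank = max(1, int(np.sum(s > tol * s[0])))
        cores.append(u[:, :new_rank].reshape(rank, 2, new_rank))
        remainder = s[:new_rank, None] * vt[:new_rank, :]
        rank = new_rank
    cores.append(remainder.reshape(rank, 2, 1))
    return cores

# 2**16 samples of a smooth oscillatory function compress to small TT ranks.
d = 16
x = np.linspace(0.0, 1.0, 2**d)
values = np.exp(-5.0 * x) * np.sin(20.0 * np.pi * x)
cores = tt_svd(values, d)
print("TT ranks:", [core.shape[2] for core in cores[:-1]])

# Contract the cores back together and check the approximation error.
full = cores[0]
for core in cores[1:]:
    full = np.tensordot(full, core, axes=([-1], [0]))
print("max error:", np.max(np.abs(full.reshape(-1) - values)))
```

For a smooth function the printed ranks stay small even though the grid has 65536 points, which is one way the multilevel structure mentioned in the abstract can surface: each TT core corresponds to one binary level of grid refinement.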
When and where?
22 October 2021 @ 12.00 CEST
On-site
University of Vienna, Kolingasse 14-16, 1090 Vienna @ Seminarraum 7, OG01
Room capacity: 17 people
- Registration required: email info.datascience@univie.ac.at
- Participation requirements: proof of admission eligibility will be checked at the entrance of Seminarraum 7 (accepted types of evidence: vaccinated, tested, or recovered)
Online
Zoom
https://zoom.us/j/94292120386?pwd=UlU5bFJLYXZVYWFaT0hlc1huaEFVdz09
- Meeting ID: 942 9212 0386
- Passcode: DStalks21