Mathematical modelling of acoustic measurements

In my last blog post, we got an overview of the origins of sound while highlighting the motivations for the ESR02 subproject within ROMSOC. (TL;DR: it is to develop coupled mathematical models to compute the influence of turbulent fluid and porous media on sound measurements.) This time around we focus on simplifying the scope and the complexities accounted for in a target mathematical model for sound measurements. Measuring sound provides valuable insights in engineering tasks. Knowledge of the acoustic characteristics of materials and structures, obtained a priori, is valuable to designers and architects. Measurements are also necessary to enhance the user experience in loudspeakers and similar devices. In product maintenance and diagnostics, e.g. of a vehicle, the sound signal provides information on health and performance and is constantly monitored.

But how are these measurements performed? Typically, a simplified measurement procedure involves a receiver (or sensor) which captures the sound signal. Receivers come with different accuracy, sensitivity and range in sound level measurements. They are also packaged differently to suit the environment. Knowing the measurement environment is extremely important, since enclosures cause reflections of incident sound signals and also induce their own resonance. External noise may also degrade accuracy, and in some cases there are chambers designed specifically to prevent external noise from impacting measurements (anechoic chambers). Information about the sound source must also be taken into account. A distribution of sources, e.g. a set of loudspeakers, can exhibit interference patterns, while a source moving relative to the receiver exhibits a shift in frequencies, known as the Doppler effect.

Figure 1: Loudspeaker measurement in an anechoic chamber (Wikipedia)

Recreating such an entire procedure on a computer involves replicating the entire setup digitally – making it necessary to encode the physical process into mathematical equations. The French mathematician Jean le Rond d'Alembert (1717–1783) is credited with arriving at the concise equation describing the motion of a sound wave (in 1D) through a medium,

∂²u/∂t² = c² ∂²u/∂x²,

where u denotes the acoustic disturbance and c the speed of sound in the medium.

The wave equation is a well-known partial differential equation and is of great importance in various applications. The equation has multiple solutions, most of which may be physically infeasible, so additional information from the domain of interest is needed to completely describe the motion of sound. This is where models for sound sources and environmental conditions are factored in. The image below illustrates the generation of a spherical wave from a point-sized source at the center of a rectangular domain.
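As an illustration of the kind of computation involved, the 1D wave equation can be marched forward in time with a simple finite-difference scheme. The Python sketch below is purely illustrative (toy parameters and an initial pulse in place of a proper source model), not part of the ESR02 models themselves.

```python
import numpy as np

# Illustrative finite-difference solver for the 1D wave equation
# u_tt = c^2 u_xx on [0, L] with fixed (reflecting) ends.
c, L, nx, nt = 1.0, 1.0, 101, 200
dx = L / (nx - 1)
dt = 0.004                      # chosen so the CFL number c*dt/dx < 1
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial pressure pulse
u = u_prev.copy()                          # (approximately) zero initial velocity
for _ in range(nt):
    u_next = np.empty_like(u)
    # standard second-order leapfrog update in the interior
    u_next[1:-1] = 2*u[1:-1] - u_prev[1:-1] + r2*(u[2:] - 2*u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0           # fixed ends reflect the wave
    u_prev, u = u, u_next

print(np.abs(u).max())
```

The pulse splits into left- and right-travelling waves that reflect off the boundaries, a miniature version of the enclosure reflections discussed above.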

Figure 2: Spherical wave from point source (Wikipedia)

While the wave propagation model above explains sound propagation in a sparse medium like air, the situation is more complex in porous media. Models of the acoustical behavior of porous materials were first presented in the 1950s by Zwikker, Kosten and Biot, and have been followed up to the present by more detailed models. Sound transmitted through porous materials is severely distorted, influenced by the porous micro-structures and the fractional volume of air. The signal is subjected to viscous and thermal resistances, while also being differentially transmitted through the fluid within the media and the porous frame. A detailed acoustical model replicating these effects needs to consider the macroscopic effects on an incident signal, and the choice of the right model depends on the type of porous material in a specific application.

Physical processes in nature are often observed as a multitude of phenomena happening simultaneously and need simplification to quantify, analyze and predict them. A variety of models need to be sewn together to reproduce the phenomena accurately in a computer. The ROMSOC project aims to address this very issue through mathematical modeling and coupling. Follow our project blog for more posts on the topic.

About the author:
Ashwin Nayak is an early-stage researcher (ESR2) in the Reduced Order Modelling, Simulation and Optimization of Coupled systems (ROMSOC) project. He is affiliated with the Technological Institute of Industrial Mathematics (ITMATI), Spain, and working in collaboration with Microflown Technologies B.V., Netherlands. His research aims to develop coupled mathematical models to enable acoustical measurements in fluid–porous media interactions.

How fast can we get for the European Extremely Large Telescope? Performance optimizations on real-time hardware

2020 started recently, so it's time to look into the stars again. I will continue with my last year's blog post "A new year has started. Let's look into the stars!" and share with you my current challenges at work, which are all about: how fast can we get for the European Extremely Large Telescope?

As already described in my first blog post, my research is dealing with adaptive optics and, in particular, with the problem of atmospheric tomography for the European Extremely Large Telescope (ELT). Because the atmosphere changes rapidly, this problem has to be solved in real-time. In our case, real-time means within two milliseconds. This requirement is quite challenging for the ELT, because the telescope is so large that a huge amount of data has to be processed. The challenge I'm dealing with is how to optimize the performance of solving the atmospheric tomography problem.
The first decision you have to make when dealing with performance optimizations is which algorithm you want to use. Within my research, I have to solve a system of equations, which is a common problem in many real-world applications. You can either choose a direct solver or an iterative one. In contrast to a direct solver, an iterative approach starts with an initial guess and iterates until it reaches a solution that is good enough. In general, iterative solvers are better suited for very large and sparse systems. Moreover, you have the possibility to use an iterative solver in a matrix-free fashion, i.e., to avoid storing all the matrix entries, which saves a lot of memory.
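To make the matrix-free idea concrete, here is a toy Python sketch (the operator and sizes are illustrative, not the actual tomography system): the conjugate gradient method only ever asks for the product of the matrix with a vector, so the matrix itself is never stored.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Toy matrix-free solve: the 1D finite-difference Laplacian (2, -1 stencil,
# homogeneous Dirichlet boundaries) is applied as a function, so the
# n x n matrix never exists in memory.
n = 1000

def apply_laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=apply_laplacian)
b = np.ones(n)
x, info = cg(A, b)   # conjugate gradient for this SPD system; info == 0 means converged
print(info, np.linalg.norm(apply_laplacian(x) - b))
```

Storing this matrix densely would take n² entries; the function above needs only the vector it is applied to.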
If you want to improve the performance of an existing algorithm you usually try to parallelize, i.e., execute steps in parallel. For example, when adding two vectors of length n, every entry is completely independent of the others. Thus, you can execute the n additions in parallel and save a lot of time. For more complex algorithms, deciding where and how to parallelize becomes a quite challenging task.
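As a rough stand-in for parallel execution, the vector-addition example can be timed in Python: NumPy's vectorised addition dispatches the n independent additions to optimised, SIMD-parallel low-level code, while an explicit loop performs them one at a time.

```python
import time
import numpy as np

n = 1_000_000
a, b = np.random.rand(n), np.random.rand(n)

# Serial: one entry at a time
t0 = time.perf_counter()
c_loop = np.empty(n)
for i in range(n):
    c_loop[i] = a[i] + b[i]
t_loop = time.perf_counter() - t0

# Data-parallel: all n independent additions in one vectorised call
t0 = time.perf_counter()
c_vec = a + b
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  vectorised: {t_vec:.5f}s")
```

The two results are identical; only the time to compute them differs by orders of magnitude.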
Another non-negligible aspect is the right hardware – a topic in which mathematicians are in general not well versed. This is one of the nice things about ROMSOC: for half of my PhD I'm working at the company Microgate with specialists in real-time hardware. This gives me the possibility to look into various fields, not only mathematics, and improve my interdisciplinary skills. At the moment we are running our algorithm on a high-end NVIDIA Tesla V100 GPU at Microgate as well as on a high-performance computing cluster called Radon1 at RICAM in Linz.

Altogether, performance optimization is a hard task. You never know in advance whether you can gain enough speed for your underlying algorithm, which can become quite gruelling after a while. Nevertheless, it is really important for practical applications and something to which many mathematicians do not pay enough attention.

About the author

Bernadett Stadler studied Industrial Mathematics at the Johannes Kepler University in Linz. Since May 2018 she has been part of the ROMSOC program as a PhD student, involved in a cooperation between the university in Linz and the company Microgate in Bolzano. Her research addresses the improvement of the image quality of extremely large telescopes, with the goal of enabling a sharper view of more distant celestial objects.

Model reduction for port-Hamiltonian systems

Port-Hamiltonian systems are network-based models that are formed by decomposing a physical system into submodels that are interconnected via energy exchange. The submodels may arise from different physical domains, e.g. electrical, thermodynamic, or mechanical systems. The port-Hamiltonian structure is preserved under power-conserving interconnection, and properties such as stability, passivity, as well as conservation of energy and momentum, are encoded directly into the structure of the model.

When the interconnection of submodels leads to further constraints, such as Kirchhoff's laws or position and velocity constraints in mechanical systems, the appropriate model class is that of port-Hamiltonian descriptor systems. In the last year, model reduction techniques for port-Hamiltonian descriptor systems have received substantial attention, and major progress has been made in drastically reducing the model dimensions while fully preserving the constraints. This has a major impact on the use of these models in practice. While the dimension of the dynamical part can be drastically reduced, the constraints remain valid, so replacing the full model by a reduced model in any subcomponent will not violate the underlying physics. Each subcomponent can thus be modelled as a model hierarchy, where, depending on the application, a detailed or a coarse model is used.
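For intuition, here is a minimal, hypothetical port-Hamiltonian example in Python (not taken from the papers cited here): a two-dimensional system ẋ = (J − R)∇H(x) with skew-symmetric interconnection J, positive semidefinite dissipation R and quadratic Hamiltonian H(x) = ½xᵀQx. The power balance dH/dt = −∇H(x)ᵀR∇H(x) ≤ 0 then guarantees that the stored energy never grows, directly from the structure.

```python
import numpy as np

# Toy port-Hamiltonian system (illustrative numbers): a lossy oscillator.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.diag([0.0, 0.1])                    # positive semidefinite dissipation
Q = np.eye(2)                              # energy weight: H(x) = 0.5 x^T Q x

def step(x, dt):
    # explicit Euler step of x' = (J - R) grad H(x), with u = 0
    return x + dt * (J - R) @ (Q @ x)

x = np.array([1.0, 0.0])
H0 = 0.5 * x @ Q @ x
energies = []
for _ in range(2000):
    x = step(x, 1e-3)
    energies.append(0.5 * x @ Q @ x)
print(H0, energies[-1])
```

Because stability is a structural property here, any structure-preserving reduced model of such a system inherits it automatically, which is the point of the techniques discussed above.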

See the recent papers

C.A. Beattie, S. Gugercin and V. Mehrmann, Structure-preserving interpolatory model reduction for port-Hamiltonian differential-algebraic systems, 2019.

S. Hauschild, N. Marheineke and V. Mehrmann, Model reduction techniques for linear constant coefficient port-Hamiltonian differential-algebraic systems, 2019. To appear in Control and Cybernetics.

These papers include examples of controlled flow problems and mechanical multi-body systems.

About the author:
Professor Dr. Volker Mehrmann is full professor of Mathematics at Technische Universität Berlin and the president of the European Mathematical Society (EMS). Moreover, he is the coordinator of the ROMSOC project.

Alongside his activities for the EMS, he has been sitting on the boards of the International Council for Industrial and Applied Mathematics (ICIAM) and the EU-MATHS-IN initiative, which promotes collaboration between mathematics researchers and industry. Additionally he is a member of the German Academy of Science and Engineering (acatech) and acted as spokesperson for the Berlin research center for application driven mathematics MATHEON between 2008 and 2016.

He can look back on many years of active commitment to international exchange and the promotion of mathematics and its applications. He seeks to improve communication and cooperation between mathematical disciplines as well as between mathematics and other sciences, focusing also on the promotion of early-career researchers and women in mathematics.

His research interests are in the areas of numerical mathematics/scientific computing, applied and numerical linear algebra, control theory, and the theory and numerical solution of differential-algebraic equations. In recent years he has focused on the modelling, simulation and control of systems described by port-Hamiltonian differential-algebraic systems, which form an exciting and very promising new modeling paradigm.

Thermo-mechanical modeling of blast furnace hearth for ironmaking

Steelmaking is a very old process and has contributed to the development of technological societies since ancient times. The stage before steelmaking is the ironmaking process, which is performed inside a blast furnace. A blast furnace is a metallurgical reactor used to produce hot metal (95% Fe, 4.5% C) from iron ore. The burden contains iron ore, fluxes and coke. The process involves an exothermic reaction, the gasification of carbon. A high temperature inside the blast furnace is required to increase the gasification of carbon; the temperature in the hearth can be as high as 1500 degrees Celsius. The thermal stresses induced by the high temperature inside the blast furnace hearth limit the overall blast furnace campaign period. The numerical computation of thermal stresses requires coupling thermal and mechanical models and solving the associated coupled system.

Blast furnace layout (left) and Blast furnace hearth (right) [Courtesy : ArcelorMittal]

My work in this project has two main objectives. The first objective is to numerically compute the temperature profile inside the blast furnace hearth walls and to compute the thermal stresses. This requires applying concepts from the fields of continuum mechanics, numerical analysis and scientific computing. However, the increasing demands placed on these fields by complex processes have put computational resources under considerable pressure. This is particularly true if such computations have to be performed repeatedly for different parameters. By parameters, we mean variations in the material properties or in the geometry of the domain of interest. In the absence of efficient computational algorithms, the simulations might be time consuming, if not infeasible, with the available computational power. Hence, another objective of my work is to make the computations faster and more efficient, while maintaining the reliability of the numerical simulation above a minimum acceptable level, by using techniques of reduced order modeling for parametric partial differential equations.

The interdisciplinary nature of this project brings additional challenges. While working with the industrial partner, inputs about the processes are gathered from experienced professionals in the field of blast furnace operations and from experimental measurements. During the analysis work, mathematical methodologies are devised under the supervision of experts in continuum mechanics, applied mathematics and scientific computing. It can be quite challenging at times to coordinate and bring everyone onto a common platform. This challenge has motivated me to develop skills in various fields in order to be capable of working with experts from different domains. Besides, an industrial doctorate in the ROMSOC project framework has provided me with an environment in which to be disciplined as a professional as well as flexible and open-minded as a researcher. In addition to the skills obtained within an individual project, the training courses and networking opportunities have contributed significantly by providing an overview of concepts in other potentially relevant areas and, in turn, have helped me gain an alternative perspective on research.

I am confident that the learning process will continue and I look forward to contributing positively to the organizations involved and, at large, to the scientific community.

About the author:

Nirav Vasant Shah is an Early-Stage Researcher (ESR10) within the ROMSOC project. He is a PhD student at the Scuola Internazionale Superiore di Studi Avanzati di Trieste (SISSA) in Trieste (Italy). He is working in collaboration with ArcelorMittal, the world's leading steel and mining company, in Asturias (Spain) and the Technological Institute for Industrial Mathematics (ITMATI) in Santiago de Compostela (Spain) on the mathematical modelling of thermo-mechanical phenomena arising in the blast furnace hearth, with application of model reduction techniques.

Halfway through my Ph.D.: the experience of an MSCA Fellow

by Marco Martinolli

My story as a Marie Sklodowska-Curie Action (MSCA) Fellow started in March 2018. It has been a year and a half full of experiences, travels and hard work that have made me grow both professionally and personally in many ways. Therefore, being at the midpoint of my Ph.D. career at Politecnico di Milano (Milan, Italy), it is a good moment to look back and reflect on my experience as ESR9 within the ROMSOC doctoral project.

In this time, I have had the opportunity to work on an industrial project with a mission in which I deeply believe: improving the life conditions of people affected by advanced heart failure. In fact, my research topic consists of the numerical study of membrane-based blood pumps, which are implantable devices that support the cardiac function of damaged hearts. Specifically, I study the fluid-structure interaction that arises in a novel prototype of blood pump developed at CorWave Inc. (Paris, France). In these pump systems, the propelling action of an oscillating elastic membrane results in the ejection of blood from the left ventricle into the aorta in a pulsatile regime. My goal is to model the membrane-blood dynamics and simulate the pump system in three dimensions, so that it is possible to predict the hydraulic performance of the pump and optimize it under different operating conditions. As a mathematical engineer, it makes me proud to employ my numerical and computational skills on a real application with potential beneficial effects on the global health system.

Besides my research activity, my job as an MSCA Fellow includes many other tasks and responsibilities, ranging from participation in a European training program in advanced mathematics to the creation of teaching material for other students in the field of coupling methods. In particular, in the last months I have worked a lot on the organization of an upcoming scientific event, the Workshop of Industrial Mathematics (WIM2019) in Strobl, Austria. This type of commitment was new for me and allowed me to enhance my abilities in managing resources and coordinating people.

An important mission of my doctoral program is also research dissemination. For this reason, I attended several important congresses and conferences, like INDAM2018 in Rome, ESAO2018 in Madrid and ICIAM2019 in Valencia, to study cutting-edge technologies and promote my research. But the experience that taught me the most is actually my recent participation in the European Researchers' Night, a scientific event organized within the Horizon2020 program that is aimed at involving the general public in the activities of the local scientific community. Participating in the MEETmeTONIGHT exhibition in Milan on September 27th allowed me for the first time to present my research to people of different ages and backgrounds. I had the pleasure of entertaining young students from high schools as well as families with kids using science and mathematics, and I have learnt how to adjust my communication style depending on the audience.

Finally, this experience has met my international aspirations. In fact, in order to work at the partner company, I lived in Paris – a city that I love – for 6 months, and I am about to move there again for another year. This job has also given me the chance to travel around Spain, France, Germany and Austria, expanding my linguistic knowledge and my relational skills in multi-cultural and multi-ethnic communities.

To be honest, the path through my European doctorate has not always been easy and smooth: I have experienced the different approaches of the academic and industrial environments, and consequently the different expectations they have of my work; I have had to manage my time between the progress of my research and the deliverables for the ROMSOC project; and, not to be underestimated, I have also struggled to find accommodation every 6 months in Milan and Paris. However, I strongly believe that all the challenges faced in these first one and a half years have made me a better worker in many respects, and that my future career will benefit a lot from this personal and professional growth. Plus, I will bring with me many good memories.

Experience as a MSCA Fellow after 1.5 years from my recruitment time.

Marco Martinolli is an Early-Stage Researcher (ESR9) within the ROMSOC project. He is a PhD student at Politecnico di Milano (Milan, Italy) and is working in collaboration with CorWave Inc. (Paris, France). His research project deals with numerical simulations for the fluid-structure interaction arising in blood pumps based on wave membranes.

Continuous casting – Modern techniques to solve an old industrial problem

Most of the steel we use every day comes from continuous casting. This is a process which was first introduced in the 1950s, but the first patents are a hundred years older. Before continuous casting, steel was poured into closed molds to form ingots. With the development of the continuous caster it became possible to produce continuous billets, blooms or slabs. This allowed very long pieces of better quality to be produced much faster. The technology has improved since its invention, and nowadays these casters can produce up to 10 meters of steel per minute.

Figure 1: Schematic of a continuous caster (credits: Klimes and Stetina)

The industrial partner of this project is the Italian steelmaking company Danieli & C. Officine Meccaniche SpA, which designs and runs continuous casters. True to the ROMSOC DNA, Danieli came up with the challenge. As these casters go faster and faster, controlling the quality of the product becomes harder and harder. The main problems arise in the mold. There the steel begins its solidification, and external imperfections can appear due to incorrect heat extraction by the mold, which is cooling down the steel.

Figure 2: Vertical section of a mold

Until now, steelmakers have used thermocouples to measure the temperature at some points in the mold. This is not sufficient for a proper understanding of the behavior of the mold, because what we are really interested in is the heat flux between the mold and the solidifying steel. The objective of my project is therefore to develop a method for estimating the heat flux at the boundary of the mold based on the thermocouple measurements made inside the mold domain. Moreover, this has to be done in real time. This is what we call an inverse problem. As you may know, it ain't easy to work with inverse problems which, by the way, are ill-posed. Several techniques are available for solving this kind of problem; however, they are generally computationally expensive. Here is where Model Order Reduction (MOR) comes into the picture. In my project, we want to exploit MOR to speed up the solution of this inverse heat transfer problem, obtaining real-time computations of the heat flux. After this brief overview of the subject of my PhD, in the next posts I will go deeper into the methodology we are developing. So keep visiting the ROMSOC blog to stay updated on my project and all the other terrific ones.
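To see why ill-posedness bites, here is a toy Python sketch (the operator and data are illustrative, not the actual mold model): an unknown input q is observed through a smoothing operator G, mimicking how a boundary heat flux maps to interior temperature data. The smoothing makes G badly ill-conditioned, so tiny measurement noise wrecks a naive solve, while Tikhonov regularisation trades a little bias for stability.

```python
import numpy as np

# Toy linear inverse problem d = G q + noise with a smoothing kernel G.
rng = np.random.default_rng(0)
n = 60
s = np.linspace(0.0, 1.0, n)
G = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2) / n   # smoothing operator
q_true = np.sin(2.0 * np.pi * s)                          # "true" unknown flux
d = G @ q_true + 1e-4 * rng.standard_normal(n)            # noisy measurements

def tikhonov(G, d, alpha):
    # minimises ||G q - d||^2 + alpha * ||q||^2
    return np.linalg.solve(G.T @ G + alpha * np.eye(len(d)), G.T @ d)

q_naive = np.linalg.lstsq(G, d, rcond=None)[0]   # unregularised: noise explodes
q_reg = tikhonov(G, d, alpha=1e-6)
err_naive = np.linalg.norm(q_naive - q_true)
err_reg = np.linalg.norm(q_reg - q_true)
print(err_naive, err_reg)
```

The regularised reconstruction stays close to the true input, while the naive one is dominated by amplified noise; the expensive part in practice is that such solves must be repeated, which is exactly where MOR helps.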

About the author

Umberto Emil Morelli is an early stage researcher within ROMSOC. He is doing his PhD in reduced order modeling for boundary conditions estimation in heat transfer problems at the Technological Institute for Industrial Mathematics (ITMATI) in collaboration with Danieli & C. Officine Meccaniche SpA and the Scuola Internazionale Superiore di Studi Avanzati (SISSA).

Some lessons in Mathematical Optimization at the University of Erlangen

Between the 22nd and 26th of July this year, (almost) all of the ESRs of the ROMSOC Program took part in the Mandatory Training Course in Mathematical Optimization at the University of Erlangen–Nuremberg.

During that time, we have received a thorough introduction to mathematical optimization as such. We learned about the challenges with regard to integer programming. We also practiced solving such problems, both by hand and using state-of-the-art software (Gurobi). Differences between good and bad model formulations were discussed as well.

We also held numerous exchanges about the applications of mathematical optimization, particularly planning problems in industry. We learned how to build optimization models applicable to problems occurring in logistics and transport, as well as production, energy systems, telecommunications and many more. We also had a long discussion about the use of optimization in the context of social equity vs. efficiency.

Finally, we went for an excursion into the Franconian mountains, where – despite unfavorable weather – we had a chance to discuss our views and ideas on the subject in an informal setting.

The course was led by professor H. Paul Williams – a person of paramount importance in the world of discrete optimization. We had a chance to benefit from his decades-long experience in the field, starting from the professor's first attempt to formulate a production model in the 1960s, when punched cards were still required to run any kind of computer, through the times when he worked at IBM to develop one of the first optimization software packages (MPSX), and ending in academia, ultimately at LSE.

Below are some pictures from the event:


About the author:

Jonasz Staszek is an early-stage researcher (ESR7) within ROMSOC. He is working with Friedrich-Alexander-Universität Erlangen-Nürnberg in collaboration with DB Cargo Polska on mixed-integer optimization models for an optimal resource allocation and investment for border-crossing transport services in rail traffic.


Let’s Break the Curse of Dimensionality

There I was, considering my options after graduation. Different thoughts were rambling across my mind; the idea of getting an advanced degree like a Ph.D. and learning a topic at a deeper level intrigued me. I truly enjoyed working on my master thesis and decided to pursue my Ph.D. in the same field.
So, what do I do? Ah, the question that triggers an existential crisis. Sometimes, it comes from innocent, friendly strangers who are trying to spark a conversation. This image from 'PhD Comics' really justifies the length of time it takes to answer the question "So, what do you do?" Hmm… Let me try to explain it. Just hang in there – you are bound to understand one of the modern problems in financial mathematics and my humble efforts to solve it.


Packaged retail investment and insurance products (PRIIPs) are at the heart of the retail investment market. PRIIPs offer considerable benefits for retail investors and make up a market in Europe worth up to €10 trillion. However, the product information provided by financial institutions to investors can be overly complicated and contain confusing legalese. To overcome these shortcomings, the EU has introduced new regulations on PRIIPs (European Parliament Regulation (EU) No 1286/2014). According to this regulation, a PRIIP manufacturer must provide a key information document (KID) for an underlying product that is easy to read and understand. PRIIPs include interest rate derivatives such as floating coupon bonds or interest rate caps and floors.

A KID includes a section on 'What could an investor get in return?', which requires costly numerical simulations of financial instruments. Generally, interest rate derivatives are evaluated based on the dynamics of short-rate models. For the simulation of short-rate models, techniques based on discretized convection-diffusion-reaction partial differential equations (PDEs) are often superior. It is necessary to note that the choice of a short-rate model depends on the type of financial instrument. The Hull-White model is one example of a short-rate model:

dr(t) = (θ(t) − a r(t)) dt + σ dW(t),

where r(t) is the short rate, θ(t) a time-dependent drift calibrated to the market, a the mean-reversion speed, σ the volatility and W(t) a Wiener process.

The model parameters are usually calibrated based on market structures like yield curves. A yield curve shows how interest rates vary across 20–30 time points known as 'tenor points'.

Fig. 1: 10,000 simulated yield curves in 10 years from now
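Simulations of this kind can be sketched in a few lines of Python with Euler–Maruyama time stepping of the Hull-White dynamics dr = (θ(t) − a·r) dt + σ dW. All parameters below are illustrative toy values, not calibrated to any market yield curve.

```python
import numpy as np

# Toy Euler-Maruyama simulation of Hull-White short-rate paths.
rng = np.random.default_rng(42)
a, sigma, r0 = 0.1, 0.01, 0.02          # illustrative mean reversion, vol, start
theta = lambda t: 0.002 + 0.001 * t     # toy drift term (stand-in for calibration)
T, n_steps, n_paths = 10.0, 120, 10_000
dt = T / n_steps

r = np.full(n_paths, r0)
for k in range(n_steps):
    t = k * dt
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)   # Brownian increments
    r = r + (theta(t) - a * r) * dt + sigma * dW

print(r.mean(), r.std())
```

Each of the 10,000 paths plays the role of one simulated rate scenario; in practice the calibrated θ(t) comes from the observed yield curve rather than a toy formula.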

Current idea:

So, in short, the goal is to reduce the computational complexity in the analysis of financial instruments. To address this problem, we are working on a parametric model order reduction (MOR) approach based on the proper orthogonal decomposition (POD) method, also known as the Karhunen-Loève decomposition or principal component analysis in statistics. This research work aims to develop a MOR methodology independent of the particular short-rate model. Model order reduction reduces not only the computational complexity but also the computational time. POD generates an optimally ordered orthonormal basis, in the least squares sense, for a given set of computational data. The reduced order models are then obtained by projecting a high-dimensional system onto a low-dimensional subspace obtained by truncating the optimal basis. The selection of the computational data set plays an important role; it is most prominently obtained by the method of snapshots. In this method, the optimal basis is computed based on a set of state solutions, known as snapshots, which are calculated by solving the full model obtained by discretizing the PDE for some parameter values.
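The method of snapshots can be sketched in a few lines of Python. The snapshot family below is synthetic, purely for illustration; in the actual application each snapshot would be a PDE solution for one parameter value.

```python
import numpy as np

# POD by the method of snapshots: columns of S are state solutions
# ("snapshots") for different parameter values (synthetic here).
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 40)
S = np.column_stack([np.sin(p * np.pi * x) * np.exp(-p * x) for p in params])

U, sv, _ = np.linalg.svd(S, full_matrices=False)
# keep the smallest r capturing all but 1e-8 of the snapshot "energy"
cum = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(cum, 1.0 - 1e-8)) + 1
V = U[:, :r]                       # POD basis, optimal in the least-squares sense

S_approx = V @ (V.T @ S)           # project snapshots onto the POD subspace
err = np.linalg.norm(S - S_approx) / np.linalg.norm(S)
print(r, err)
```

A reduced model is then obtained by projecting the full (discretized) operator onto the span of V, so only an r-dimensional system has to be solved for each new parameter.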

The calibration based on simulated yield curves determines the size of the parameter space: each parameter vector has one entry per tenor point, where the total number of tenor points is typically 20–30. We compute the snapshot matrix for some selected values of the parameters (i.e., snapshots taken at some selected nodes).

The process of computing an optimal basis independent of such a high-dimensional parameter space is complex. We aim to develop a new algorithm to obtain the optimal basis based on tensor numerics. We construct a snapshot tensor comprising the snapshot matrices obtained for certain parameter vectors. Furthermore, we factorize the snapshot tensor using tensor decomposition techniques to generate the optimal basis. We will then use this optimal subspace to obtain a parameter-independent reduced order model.

We have implemented the classical POD approach for a numerical example of a puttable bond solved using the Hull-White model. The current research findings indicate that the MOR approach works well for short-rate models and provides a two- to three-times computational speedup (see Fig. 2).

Fig. 2: Relative error plot between the full model and the reduced model

About the author:

Onkar Sandip Jadhav is an early-stage researcher (ESR6) in the ROMSOC project. He is a PhD student at the Technische Universität Berlin (Germany) and is working in collaboration with MathConsult GmbH (Austria). In his research he is working on a parametric model order reduction (MOR) approach aiming to develop a new MOR methodology for high-dimensional convection-diffusion reaction PDEs arising in computational finance with the goal to reduce the computational complexity in the analysis of financial instruments.

Under the Volcano – Heat in Microchips

As ESR-5 in the ROMSOC project I work at the Bergische Universität Wuppertal (Germany), in the picturesque surroundings of the Bergisches Land, and at STMicroelectronics at the foot of Mount Etna in Catania (Italy). After having spent my first three months in Wuppertal, I am now living in Catania for six months. In stark contrast to the colossal size of the ever-looming Mount Etna, my research takes me into the miniature world of nanoelectronics.

Microchip details are on the order of nanometres – one millionth of a millimetre. On that scale a human hair looks like a giant tree. Many phenomena (electromechanical, thermal, quantum physical) occur inside microchips; in this blog I will focus on thermal aspects. Given the nano-scale of the details it is impossible to perform any measurements inside a working microchip. To understand and try to improve the processes inside microchips, we convert them into mathematical equations and apply a combination of multirate (MR) and model order reduction (MOR) techniques. Both techniques are well known individually, but combining them is novel.

Our goal is to build a simulation of what happens inside a working microchip to help microchip designers create better products.

The Problem: Simulating A Large Coupled System

In the simulation of nanoelectronics there are a great many things to consider. Whilst trying to accurately model the natural phenomena happening inside a microchip, one should also safeguard the feasibility of the model's numerical solution. The focus of this project lies on the simulation of large coupled systems with different intrinsic time scales.

So let us start at the beginning and look at the origins of these large coupled systems. A microchip is essentially a circuit of many different components, e.g. resistors or transistors, that interact with each other. These interactions are modelled by means of equations and together they form a system of equations. The equations governing the heat of components and the circuit are coupled together, resulting in a coupled system of equations:

ẋ(t) = f(x(t), u(t)),
y(t) = g(x(t), u(t)),

\[ \dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t), \]

where $x$ is the state vector of the coupled system, $u$ the input and $y$ the output vector. Furthermore, we consider the system, the input and the output to be linear for now.
As these circuits get more and more intricate, these systems continue to grow in size. For modern-day microchips they can consist of millions of equations. As one can imagine, this makes straightforward simulation quite unfeasible.
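As a concrete toy illustration of such a coupled linear system, the sketch below integrates a hand-made two-equation example with a plain forward Euler scheme. The matrices, input and step size are invented for illustration and have nothing to do with a real chip model.

```python
import numpy as np

# Toy instance of the linear system  x'(t) = A x(t) + B u(t),  y(t) = C x(t).
# The matrices are illustrative only, not taken from a real chip model.
A = np.array([[-1.0,  0.5],
              [ 0.2, -3.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 1.0]])

def simulate(x0, u, dt, steps):
    """Plain forward-Euler time integration, recording the output y."""
    x = x0.copy()
    ys = []
    for k in range(steps):
        x = x + dt * (A @ x + B @ u(k * dt))
        ys.append((C @ x).item())
    return ys

# Constant unit input; after enough steps y settles at -C A^{-1} B u.
y = simulate(np.array([[0.0], [0.0]]), lambda t: np.array([[1.0]]), 1e-2, 1000)
```

For a system with millions of equations this dense, single-rate approach is of course hopeless, which is exactly the motivation for MR and MOR.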

MR and MOR

The objective of this project is to combine MR and MOR to drastically improve the simulation speed. MR can exploit the different dynamical time scales of subsystems to improve the overall computational efficiency, without losing global solution accuracy. MOR aims to reduce the size of subsystems and decrease the numerical effort for each time step whilst maintaining a certain level of accuracy.


Consider the previously mentioned system of circuit equations and heat equations, combined in one generalised system

\[ \dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t), \]

with $x = (x_A, x_L)^T$. Since the two phenomena act on separate time scales (the circuit equations evolve very fast compared to the slow heat equations), the system can be naturally split into two subsystems

\[ \dot{x}_A = A_{AA} x_A + A_{AL} x_L + B_A u, \qquad \dot{x}_L = A_{LA} x_A + A_{LL} x_L + B_L u, \]

where $x_A$ denotes the fast changing, active components and $x_L$ the slow changing, latent components of $x$. By using a multirate method this system can be integrated using a macro time step for the latent subsystem and a micro time step for the active subsystem, as illustrated in the figure below.

So MR divides the system into subsystems with different dynamic scales. This results in subsystems with similar intrinsic characteristics, which makes them suitable for model order reduction.
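A minimal multirate sketch of this idea, assuming a hand-made fast/slow pair of scalar equations (the coefficients and step sizes are invented for illustration): the latent component takes one macro step while the active component takes m micro steps, with the latent value frozen in between.

```python
# Toy fast/slow system with illustrative coefficients (not from a real
# chip model):
#   x_A' = -50 x_A + x_L    (active, fast)
#   x_L' = 0.5 x_A - x_L    (latent, slow)
def multirate_euler(xA, xL, H, m, steps):
    """Integrate with a macro step H for the latent component and
    m micro steps h = H/m for the active one; the latent value is
    frozen while micro stepping."""
    h = H / m
    for _ in range(steps):
        xL_frozen = xL
        # one macro step for the slow, latent component
        xL = xL + H * (0.5 * xA - xL)
        # m micro steps for the fast, active component
        for _ in range(m):
            xA = xA + h * (-50.0 * xA + xL_frozen)
    return xA, xL

# Both components decay towards zero; the fast one needs the small step.
xA, xL = multirate_euler(1.0, 1.0, H=0.1, m=10, steps=100)
```

The point of the scheme is that the expensive, fine time stepping is only paid for the small active part, not for the whole system.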


In MOR the goal is to approximate the output $y$ of a large system $\Sigma$ of $n$ equations. This approximation is done by creating a smaller system $\hat{\Sigma}$ of $r$ equations. This smaller system $\hat{\Sigma}$ should be constructed in such a way that the difference between its output $\hat{y}$ and the original $y$ is small, while $r$ is also much smaller than $n$.

In the context of circuit simulation this smaller system is used to obtain faster simulations with results that approximate the full simulation to a certain degree of accuracy. A general form of MOR can be described by choosing suitable matrices $V \in \mathbb{R}^{n \times r}$ and $W \in \mathbb{R}^{n \times r}$, with $V$ and $W$ biorthonormal ($W^T V = I$), to create the projection $x \approx V \hat{x}$, which can be expressed in mathematical form by

\[ \dot{\hat{x}} = W^T A V \hat{x} + W^T B u, \qquad \hat{y} = C V \hat{x}. \]
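A small numerical sketch of this projection follows. The full system, the reduced size r = 1 and the hand-picked V and W are all toy assumptions; a real MOR method (e.g. balanced truncation or Krylov-based moment matching) would construct V and W algorithmically.

```python
import numpy as np

# Full system (n = 4) with one dominant slow mode; all matrices here
# are illustrative toys, not taken from an actual chip model.
n, r = 4, 1
A = np.diag([-1.0, -100.0, -200.0, -300.0])
B = np.ones((n, 1))
C = np.ones((1, n))

# Hand-picked biorthonormal projection matrices (W^T V = I_r),
# spanning only the dominant (slowest) mode.
V = np.zeros((n, r)); V[0, 0] = 1.0
W = V.copy()

# Projected, reduced system: r equations instead of n.
Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V

# Compare steady-state outputs y = -C A^{-1} B for a unit input.
y_full = (-C @ np.linalg.solve(A, B)).item()
y_red = (-Cr @ np.linalg.solve(Ar, Br)).item()
```

The reduced model reproduces the contribution of the slow mode and discards the fast modes, whose contribution to the steady-state output is small here by construction.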

The Solution: Combining MR and MOR

To give an intuitive example of how to combine these two techniques, consider a multirate system as defined by

\[ \dot{x}_A = A_{AA} x_A + A_{AL} x_L + B_A u, \qquad \dot{x}_L = A_{LA} x_A + A_{LL} x_L + B_L u. \]

Note that the latent subsystem is considered to be linear. Now we can apply model order reduction to the latent state $x_L$ by constructing matrices $V$ and $W$. This results in the system

\[ \dot{x}_A = A_{AA} x_A + A_{AL} V \hat{x}_L + B_A u, \qquad \dot{\hat{x}}_L = W^T A_{LA} x_A + W^T A_{LL} V \hat{x}_L + W^T B_L u. \]

For now this simple example serves as an apt illustration. However, as the dimension of the output variable of the reduced subsystem is still equal to the dimension of the coupling interface, we are not quite there yet; those technicalities will be a story for another blog post.
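Putting the two ingredients together in the same spirit, the following sketch reduces a toy latent subsystem with hand-picked V and W and then integrates the coupled pair with macro and micro Euler steps. All matrices, dimensions and step sizes are invented for illustration only.

```python
import numpy as np

# Toy combined MR + MOR setup: one active (fast) equation coupled to a
# two-dimensional latent (slow) block with one fast-decaying mode.
AAA = np.array([[-50.0]])        # active block
AAL = np.array([[1.0, 0.5]])     # coupling active <- latent
ALA = np.array([[0.5], [0.0]])   # coupling latent <- active
ALL = np.diag([-1.0, -200.0])    # latent block

# Hand-picked biorthonormal V, W (W^T V = 1) keeping the slow latent mode.
V = np.array([[1.0], [0.0]])
W = V.copy()
Ar, Br, Cr = W.T @ ALL @ V, W.T @ ALA, AAL @ V   # reduced latent pieces

def mr_mor_step(xA, zL, H, m):
    """One macro Euler step H for the reduced latent state zL and
    m micro Euler steps H/m for the active state xA."""
    h = H / m
    zL_frozen = zL
    zL = zL + H * (Ar @ zL + Br @ xA)
    for _ in range(m):
        xA = xA + h * (AAA @ xA + Cr @ zL_frozen)
    return xA, zL

xA, zL = np.array([1.0]), np.array([1.0])
for _ in range(100):
    xA, zL = mr_mor_step(xA, zL, H=0.1, m=10)
```

The micro stepping now acts on the small active part and the macro stepping on an even smaller reduced latent part, which is the whole point of the combination.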

The Research Project

The main objective of this project is to combine MR and MOR in an industrial setting. This means that instead of the simple linear system given as an example, more complex systems will be considered, e.g. non-linear time-dependent systems or combinations of a variety of different subsystems linked together, for which different types of MR and MOR techniques need to be considered.

Besides the combination of the two techniques, other goals include: defining error estimates linked to the accuracy requirements at each iteration level of the optimisation flow, preserving the overall properties of the system, and ensuring the stability of the dynamic iteration process.

I hope that this brief introduction to my research topic has given you some insight into my daily activities. To follow the progress of my fellow researchers and me please keep a sharp eye on the ROMSOC website and follow us on Twitter and Facebook!
For now tschüss and arrivederci.
Marcus Bannenberg

The closure problem explained in a daily life application

One of the main purposes of scientific research is to structure the world around us, discovering patterns that transcend individual contexts and consequently help to unify applications in various fields.

On this occasion I would like to show a generic model that has many practical applications, as varied as the design of excavation operations, strategies to eliminate military objectives, and the selection of projects in companies to obtain greater profits.

A specific case could be the following: companies frequently have to weigh the profits produced by certain projects against the expenses necessary for the support activities these projects require. Let us suppose that a telecommunications company is evaluating the pros and cons of a project to offer some type of high-speed home access service to its customers. The market research shows that this service would produce a good amount of profit, but that it has to be weighed against the cost of carrying out some preliminary projects necessary to make the service possible, for example, increasing the fiber optic capacity in the core of the network and purchasing a new generation of high-speed routers.

What makes these kinds of decisions complicated is that the projects interact in complex ways: in isolation, the gain from the high-speed service may not be enough to justify the modernization of the routers; however, once the company has modernized the routers it may be in a position to offer another lucrative service to its customers, and perhaps this new project will tilt the profits in favor of the company. In addition, these interactions can be concatenated: the new project may require other expenses, but this may lead to the possibility of other lucrative projects.

In the end, the question would be: which projects to pursue and which ones to ignore? It is simply a matter of balancing the costs of realization against the profit opportunities they make possible.

One way to model these types of decisions is as follows:

There is an underlying set of $n$ projects, where each project $i$ has an associated income $p_i$ which can be positive or negative. Some projects are prerequisites for the realization of other projects. It may be that a project has more than one prerequisite, or possibly none. In addition, a project can be a prerequisite for multiple projects. We will say that a set $A$ of projects is feasible if the prerequisites of any project in $A$ are also in $A$. The problem to solve is to find a feasible set of projects with the highest total income.
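A brute-force sketch of this model, with a hypothetical project set loosely inspired by the telecom example above (all names and income values are made up):

```python
from itertools import combinations

# Hypothetical projects and incomes (positive = profit, negative = cost).
income = {"fiber": -30, "routers": -20, "fast_access": 40, "streaming": 25}
# prereq[i] lists the projects that project i depends on.
prereq = {"fast_access": ["fiber", "routers"], "streaming": ["routers"]}

def is_feasible(selected):
    """A set is feasible if every prerequisite of every chosen project
    is also chosen."""
    return all(p in selected for i in selected for p in prereq.get(i, []))

def best_feasible(projects):
    """Brute force over all subsets -- fine for a handful of projects,
    hopeless at scale, which is why a flow formulation is needed."""
    best, best_val = set(), 0
    for k in range(len(projects) + 1):
        for subset in combinations(projects, k):
            s = set(subset)
            val = sum(income[i] for i in s)
            if is_feasible(s) and val > best_val:
                best, best_val = s, val
    return best, best_val

chosen, value = best_feasible(list(income))
```

In this toy instance neither paid project is worth its prerequisites on its own, but choosing both of them (and both prerequisites) yields a positive total, exactly the kind of concatenated interaction described above.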

In terms of graph theory we can define this set with the name of closure: a closure in a directed graph $G = (V, E)$ is a subset of vertices with no arcs leaving it, that is, a subset $C \subseteq V$ such that if $u \in C$ and $(u, v) \in E$ then $v \in C$.

If we assign a weight $w_v$ (of arbitrary sign) to each node $v$ of $G$, the problem that concerns us is: find a closure $C$ with the greatest possible total weight.

In general, our mission is to partition the set of vertices into two sets $S$ and $T$: the closure and its complement. The way in which we will model this problem is as a minimum cut in a certain graph. But in which graph? And how do we transform negative weights into positive capacities for the flow algorithm?

Figure 1 (credits: Wikipedia): Reduction from closure to maximum flow

To solve these questions we begin by defining a new graph $G'$, formed by adding a source $s$ and a sink $t$ to $G$. For every vertex $v$ with $w_v > 0$ we add an arc $(s, v)$ with capacity $w_v$, and in the same way, for every vertex $v$ with $w_v < 0$ we add an arc $(v, t)$ with capacity $-w_v$. We will define the capacities of the arcs of $G$ itself later. We can see that the capacity of the cut $(\{s\}, V \cup \{t\})$ is $\sum_{v : w_v > 0} w_v$, so the value of the maximum flow is at most this sum.

We would like to guarantee that if $(S, T)$ is a minimum cut in this graph, then $S \setminus \{s\}$ is a closure. The way we will achieve this is by defining the capacity of each arc of $G$ as $\infty$. Then we can formally define the network in question as

\[ G' = \big( V \cup \{s, t\},\; E \cup \{(s, v) : w_v > 0\} \cup \{(v, t) : w_v < 0\} \big), \]

with capacities $c(s, v) = w_v$, $c(v, t) = -w_v$, and $c(u, v) = \infty$ for $(u, v) \in E$.

Surprisingly, it is possible to demonstrate that there is a bijection between the minimum cuts in this graph and the closures in $G$. So it is only necessary to know an algorithm for finding minimum cuts in a graph to solve the problem that concerns us.
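The whole construction can be sketched end to end with a self-contained max-flow routine. Edmonds-Karp (shortest augmenting paths by BFS) is chosen here purely for brevity, not because the post prescribes it, and the project names and incomes are made up, loosely following the telecom example:

```python
from collections import deque

def max_closure(weights, edges):
    """Maximum-weight closure of a directed graph via max flow.
    weights: {node: weight}; edges: arcs (u, v) meaning "u in the
    closure forces v in the closure"."""
    INF = float("inf")
    s, t = "_source", "_sink"
    cap, adj = {}, {}

    def add_arc(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)            # residual arc
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    for v, w in weights.items():
        if w > 0:
            add_arc(s, v, w)                  # positive node: arc from source
        elif w < 0:
            add_arc(v, t, -w)                 # negative node: arc to sink
    for u, v in edges:
        add_arc(u, v, INF)                    # closure arcs must never be cut

    while True:                               # Edmonds-Karp: BFS augmenting paths
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[e] for e in path)         # bottleneck capacity
        for u, v in path:
            cap[(u, v)] -= f
            cap[(v, u)] += f

    seen, queue = {s}, deque([s])             # source side of the min cut
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen and cap[(u, v)] > 0:
                seen.add(v)
                queue.append(v)
    return {v for v in weights if v in seen}

# Made-up project data in the spirit of the telecom example.
incomes = {"fiber": -30, "routers": -20, "fast_access": 40, "streaming": 25}
prereqs = [("fast_access", "fiber"), ("fast_access", "routers"),
           ("streaming", "routers")]
closure = max_closure(incomes, prereqs)
```

The nodes on the source side of the final min cut (minus the source itself) form the maximum-weight closure, i.e. the most profitable feasible set of projects.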

This example is one of the most unexpected applications of flow theory occurring in daily life, and one of my favorites; it shows us how often we can find mathematical models in the most unexpected places. I hope that this small example has been interesting for all of you. Continue following our blog.

About the author:
José Carlos Gutiérrez Pérez from Cuba is an early-stage researcher (ESR4) within the ROMSOC project. He is working at the University of Bremen (Germany) in collaboration with the industrial partner SagivTech (Israel) on the development of new concepts for data-driven approaches, based on neural networks and deep learning, for model adaptations under efficiency constraints, with applications in magnetic particle imaging (MPI) technology.