# Symbolic presentations of dynamical systems


# Part 1: Open dynamical systems

We begin this post by talking about open dynamical systems. Open dynamical systems have been studied extensively within applied category theory (see Myers (2022)), and have also been a part of AlgebraicJulia for a while via the AlgebraicDynamics package and its accompanying blog post.

In this post, I sketch out an approach to doing open dynamical systems *symbolically*. This means that the vector fields for open dynamical systems are given by symbolic expressions, rather than arbitrary Julia functions.

This has the following advantages.

- Expressions can be serialized and stored in a database.
- Expressions can be compared for equality modulo the laws of a theory.
- The execution of expressions can be optimized in different ways, or translated into other formats.
- Expressions can have metadata attached.
- Expressions can be input from non-Julia programs.
- Expressions can be displayed in a nice format to the user.

In my previous blog post, I laid out the mathematics behind symbolic representations of functions between spaces. Here I apply that theory to the specific case of open dynamical systems.

We start out by reviewing open dynamical systems.

In this section we use the word “space” without defining what a space is. This is because, in a certain sense, we are agnostic as to the exact definition of a space. All we really care about is that the space supports some notion of derivative, as we are doing continuous-time dynamical systems. In order of increasing generality, a space could mean:

- \mathbb{R}^n for some n
- open subsets of \mathbb{R}^n
- a manifold
- a C^\infty ring
- an object of any tangent bundle category

Pick whichever level of generality you are comfortable with, and mentally substitute that choice every time I say “space”. I’ll generally give examples with \mathbb{R}^n; if you know about the more general kinds of spaces, then you know enough to generalize what I am saying.

The important thing is that any space X supports a notion of *tangent space at a point x*, denoted by T_x X, and *tangent bundle* denoted by T X. The tangent bundle consists of all of the tangent spaces put together, i.e. as sets

TX = \sum_{x \in X} T_x X

If X = \mathbb{R}^n, then T_x X \cong \mathbb{R}^n and T X \cong \mathbb{R}^n \times \mathbb{R}^n.

A **vector field** on a space X consists of a **section of the tangent bundle**: a function v \colon X \to T X such that v(x) \in T_x X for all x \in X. This gives a tangent vector at each point; you can visualize this by searching for “vector field” in Google Images.

Finally, I will behave like a physicist in that I will not overburden myself with saying precisely how differentiable all my functions are. If a function needs to be differentiable, then you can assume that it is.

With these preliminaries out of the way, we can start talking about dynamical systems.

**Definition 1.** A **closed dynamical system** consists of:

- A space X, called the **state space**
- A vector field v \colon X \to T X, with v(x) \in T_x X for all x \in X.

We often refer to v as the “dynamics” of the system. Applied mathematicians and physicists might be more used to seeing such a system written using the equation

\dot{x} = v(x)

However, as a mathematical object, the “data” of this equation is simply the function v.
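As a concrete (and deliberately naive) sketch, a closed dynamical system over \mathbb{R}^n can be represented as a dimension together with a vector-field function, and simulated with an explicit Euler step. This is illustrative Python, not the AlgebraicDynamics API; all names here are my own:

```python
from dataclasses import dataclass
from typing import Callable, List

State = List[float]

@dataclass
class ClosedSystem:
    """A closed dynamical system: a state space R^dim and a vector field v."""
    dim: int
    v: Callable[[State], State]  # v(x) is a tangent vector at x

def euler_step(sys: ClosedSystem, x: State, dt: float) -> State:
    """One explicit Euler step of the ODE x-dot = v(x)."""
    dx = sys.v(x)
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

# Exponential decay on R: x-dot = -x.
decay = ClosedSystem(dim=1, v=lambda x: [-x[0]])
x = [1.0]
for _ in range(10):
    x = euler_step(decay, x, 0.1)
```

The point of the record is exactly the remark above: the mathematical data of the system is just the function `v`; the integrator is a separate concern.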

**Example 1.** Given any Petri net P (where P(S) is the set of species and P(T) is the set of transitions), with fixed positive rates r \in \mathbb{R}^{P(T)}_{>0}, there is a closed dynamical system (X_P, v_P), where

- X_P = \mathbb{R}^{P(S)}_{>0}
- v_P \colon X_P \to T X_P is given by the mass action formula
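For concreteness, one standard way of writing out the mass-action vector field is the following, where i_{s,t} and o_{s,t} denote the input and output multiplicities of species s in transition t (notation chosen here, not fixed by this post):

```latex
v_P(x)_s = \sum_{t \in P(T)} r_t \, \bigl(o_{s,t} - i_{s,t}\bigr)
           \prod_{s' \in P(S)} x_{s'}^{\, i_{s',t}}
```

Each transition contributes a polynomial term, which is why mass-action systems will reappear as symbolic (polynomial) lenses later in this post.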

In the real world, systems are rarely closed; other parts of the world have influence on the system. Classically, we might capture this with the equation

\dot{x} = v(x, u)

where u is some other variable. However, this only captures part of “openness”; this system might also affect other systems, or we may only observe some part of a system. So we also have an “output” equation

y = f(x)

Our system might affect other systems through y.

We now state this more formally.

**Definition 2.** An **open dynamical system** consists of three spaces and two functions. The spaces are:

- A **state space** X
- An **input space** I
- An **output space** O

The functions are:

- v \colon X \times I \to T X, such that v(x,u) \in T_x X.
- f \colon X \to O
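In the same naive style as before, Definition 2 can be transcribed directly. This is an illustrative Python sketch with names of my own choosing, not a library API:

```python
from dataclasses import dataclass
from typing import Callable, List

Vec = List[float]

@dataclass
class OpenSystem:
    """An open dynamical system with Euclidean state, input, and output spaces."""
    dim_state: int
    dim_in: int
    dim_out: int
    v: Callable[[Vec, Vec], Vec]  # v(x, u) is a tangent vector at x
    f: Callable[[Vec], Vec]       # readout map f : X -> O

# A driven decay system: x-dot = -x + u, with output y = x.
driven = OpenSystem(dim_state=1, dim_in=1, dim_out=1,
                    v=lambda x, u: [-x[0] + u[0]],
                    f=lambda x: list(x))
```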

**Example 2.** Given a Petri net P, there is an open system with

- X_P = \mathbb{R}^{P(S)}_{>0}
- I_P = \mathbb{R}^{P(T)}_{>0}
- O_P = \mathbb{R}^{P(S)}_{>0}

and

- v_P \colon X_P \times I_P \to T X_P given by mass action with rate parameters u \in I_P
- f_P \colon X_P \to O_P given by the identity

**Example 3.** An open dynamical system with I = 1 and O = 1 is the same thing as a closed dynamical system.

# Part 2: Composition of dynamical systems

In this section, we learn how to compose dynamical systems with a construction from category theory called *lenses*. If you have learned about lenses before from functional programming, you may have the preconceived notion that lenses are a way of accessing nested fields in a data structure. However, here we are using lenses in a very different way; it might be useful to just think of “lens” as a new word. The two uses of lenses are formally the same, but have a different feel.

Lenses are morphisms in a certain category, so before we can define lenses we have to define the objects in that category.

**Definition 3.** An **arena** consists of a space X and a **bundle** over it, which is a space E and a map p \colon E \to X.^{1}

We think about an arena as a generalized specification of the inputs/outputs of a system. At each output x \in X, there are allowed inputs E_x := p^{-1}(x). We write an arena as \begin{pmatrix}E \\ X\end{pmatrix}. In a special case, the “output” of a system is its state, and the “input” is the direction that you tell it to go in, i.e. a tangent vector.

**Example 4.** Given any space X, there is an arena \begin{pmatrix}TX \\ X\end{pmatrix}, where p \colon TX \to X sends v \in T_x X to x (recall that T X = \sum_{x \in X} T_x X).

**Example 5.** Given any two spaces O and I, there is an arena \begin{pmatrix}O \times I \\ O\end{pmatrix}, where p \colon O \times I \to O is the projection (y, u) \mapsto y. This is known as a **simple arena**, because the inputs don’t depend on the outputs at all; they are the same everywhere. A lens between two simple arenas is known as a **simple lens**.

Lenses are then a way of mapping *outputs forward* and *inputs backward*.

**Definition 4.** Suppose that \begin{pmatrix}B \\ A\end{pmatrix} and \begin{pmatrix}D \\ C\end{pmatrix} are arenas. Then a **lens** between them consists of a function f \colon A \to C and, for every x \in A, a function f^{\sharp}_x \colon D_{f(x)} \to B_x.

**Example 6.** An open dynamical system is a lens of the form \begin{pmatrix}TX \\ X\end{pmatrix} \to \begin{pmatrix}O \times I \\ O\end{pmatrix}.

It takes a bit of unpacking here to see exactly why this is true. The forwards map f \colon X \to O is the same, but the backwards map is in a slightly different form. Namely, if we unpack the definition of a lens, then we have for every x \in X, v_x \colon (O \times I)_{f(x)} \to T_x X. This looks different from our earlier definition of open dynamical system, but we recover that earlier definition when we recall that (O \times I)_{f(x)} = I, because \begin{pmatrix}O \times I \\ O\end{pmatrix} is a trivial bundle. Thus, for every x we have a map v_x \colon I \to T_x X. We can then rearrange the parameters to get v \colon X \times I \to T X, with the additional condition that v(x,u) \in T_x X, which is precisely what we said an open dynamical system was!

There are two ways of composing lenses: composing in parallel and in series. We start with composition in parallel, because that has a more immediate application in terms of open dynamical systems.

**Definition 5.** Given two arenas \begin{pmatrix}B \\ A\end{pmatrix} and \begin{pmatrix}B' \\ A'\end{pmatrix} (with bundle maps p and p'), there is an arena \begin{pmatrix}B \\ A\end{pmatrix} \otimes \begin{pmatrix}B' \\ A'\end{pmatrix} defined by

\begin{pmatrix}B \\ A\end{pmatrix} \otimes \begin{pmatrix}B' \\ A'\end{pmatrix} = \begin{pmatrix}B \times B' \\ A \times A'\end{pmatrix}

where the map p \times p' \colon B \times B' \to A \times A' is defined by (p \times p')(b,b') = (p(b), p(b')).

**Definition 6.** Given two lenses (f, f^\sharp) \colon \begin{pmatrix}B \\ A\end{pmatrix} \to \begin{pmatrix}D \\ C\end{pmatrix} and (g, g^\sharp) \colon \begin{pmatrix}B' \\ A'\end{pmatrix} \to \begin{pmatrix}D' \\ C'\end{pmatrix}, their **parallel composite** is the lens (f \times g, f^\sharp \times g^\sharp) \colon \begin{pmatrix}B \times B' \\ A \times A'\end{pmatrix} \to \begin{pmatrix}D \times D' \\ C \times C'\end{pmatrix}.

The functions f \times g and f^\sharp \times g^\sharp have types

f \times g \colon A \times A' \to C \times C'

(f^\sharp \times g^\sharp)_{x,x'} \colon D_{f(x)} \times D'_{g(x')} \to B_x \times B'_{x'}

and are defined via

(f \times g)(x,x') = (f(x), g(x'))

(f^\sharp \times g^\sharp)_{x,x'}(u,u') = (f^\sharp_x(u), g^\sharp_{x'}(u'))
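The parallel composite can be sketched directly in code. Here a lens is a pair of plain Python functions in a hypothetical encoding chosen for illustration: a forward map, and a backward map that takes the point together with the incoming direction:

```python
from collections import namedtuple

# A lens: fwd is f : A -> C, and bwd(x, d) is f#_x(d) : D_{f(x)} -> B_x.
Lens = namedtuple("Lens", ["fwd", "bwd"])

def parallel(l1: Lens, l2: Lens) -> Lens:
    """Parallel composite: act componentwise on pairs of points and directions."""
    return Lens(
        fwd=lambda xs: (l1.fwd(xs[0]), l2.fwd(xs[1])),
        bwd=lambda xs, ds: (l1.bwd(xs[0], ds[0]), l2.bwd(xs[1], ds[1])),
    )

# A toy lens on numbers, run in parallel with itself.
double = Lens(fwd=lambda x: 2 * x, bwd=lambda x, d: x + d)
pair = parallel(double, double)
```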

**Example 7.** We can use parallel composites to take two open dynamical systems and “run them in parallel”. That is, suppose that we have open dynamical systems (v, f) and (v', f') with state spaces X and X', input spaces I and I', and output spaces O and O'. Then their parallel composite is an open dynamical system with state space X \times X', input space I \times I', and output space O \times O'.

This corresponds to the ODE consisting of two equations:

\dot{x} = v(x,u) \qquad \dot{x}' = v'(x', u')

and output y = f(x), y' = f'(x'). This may seem like a “trivial” operation, but the first step to making two open dynamical systems interact is to produce this parallel composite; we then use serial composition to make the systems interact.

Serial composition of lenses, roughly speaking, “just composes the backwards and forwards maps”. But let’s spell that out in more detail.

**Definition 7.** Suppose that we have lenses (f, f^\sharp) \colon \begin{pmatrix}B \\ A\end{pmatrix} \to \begin{pmatrix}D \\ C\end{pmatrix} and (g, g^\sharp) \colon \begin{pmatrix}D \\ C\end{pmatrix} \to \begin{pmatrix}F \\ E\end{pmatrix}. We can compose them to form their **serial composite** (g \circ f, f^\sharp \circ g^\sharp) \colon \begin{pmatrix}B \\ A\end{pmatrix} \to \begin{pmatrix}F \\ E\end{pmatrix}.

On the bottom, the function g \circ f \colon A \to E is just the composite of f and g. Then on top, we have

(f^\sharp \circ g^\sharp)_x \colon F_{g(f(x))} \xrightarrow{g^{\sharp}_{f(x)}} D_{f(x)} \xrightarrow{f^{\sharp}_x} B_x
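Serial composition is equally short in the same toy encoding (a sketch, not a library API): the forward maps compose forwards, and the backward maps compose backwards, with the intermediate point recomputed via the first forward map:

```python
from collections import namedtuple

# A lens: fwd is f : A -> C, and bwd(x, d) is f#_x(d) : D_{f(x)} -> B_x.
Lens = namedtuple("Lens", ["fwd", "bwd"])

def serial(l1: Lens, l2: Lens) -> Lens:
    """Serial composite: g . f on the bottom, f# . g# on top."""
    return Lens(
        fwd=lambda x: l2.fwd(l1.fwd(x)),
        bwd=lambda x, e: l1.bwd(x, l2.bwd(l1.fwd(x), e)),
    )

# Two toy lenses on numbers and their composite.
l1 = Lens(fwd=lambda a: a + 1, bwd=lambda a, d: d + a)
l2 = Lens(fwd=lambda c: 2 * c, bwd=lambda c, e: e * c)
comp = serial(l1, l2)
```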

**Example 8.** Let A, B, X, X' be spaces, and suppose that we have two open dynamical systems: (v, f) with state space X, input space B, and output space A, and (v', f') with state space X', input space A, and output space B. We can then make a lens (w, g) \colon \begin{pmatrix}A \times B \times B \times A \\ A \times B\end{pmatrix} \to \begin{pmatrix}1 \\ 1\end{pmatrix}, where g \colon A \times B \to 1 sends (a,b) to the single element \ast \in 1, and w_{a,b} \colon 1 \to A \times B \times B \times A sends the single element \ast \in 1 to (a,b,b,a).

When we compose (v,f) and (v', f') in parallel, and then compose in series with (w,g), we get a lens \begin{pmatrix}T(X \times X') \\ X \times X'\end{pmatrix} \to \begin{pmatrix}1 \\ 1\end{pmatrix}. This is a closed dynamical system, which can be written as a pair of coupled ODEs in the following way:

\dot{x} = v(x, f'(x')) \qquad \dot{x}' = v'(x', f(x))

The point is that after composing in parallel, composing with further lenses can couple two systems together.
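Putting Examples 7 and 8 together in the toy encoding used above (again a sketch with made-up scalar dynamics, not any library's API), parallel-then-serial composition really does produce the coupled vector field:

```python
from collections import namedtuple

Lens = namedtuple("Lens", ["fwd", "bwd"])

def parallel(l1, l2):
    return Lens(lambda xs: (l1.fwd(xs[0]), l2.fwd(xs[1])),
                lambda xs, ds: (l1.bwd(xs[0], ds[0]), l2.bwd(xs[1], ds[1])))

def serial(l1, l2):
    return Lens(lambda x: l2.fwd(l1.fwd(x)),
                lambda x, e: l1.bwd(x, l2.bwd(l1.fwd(x), e)))

# Two scalar open systems, each exposing its state as its output:
#   system 1: x1-dot = -x1 + u1
sys1 = Lens(fwd=lambda x: x, bwd=lambda x, u: -x + u)
#   system 2: x2-dot = 2 * u2
sys2 = Lens(fwd=lambda x: x, bwd=lambda x, u: 2 * u)

# The coupling lens (w, g): each system's output feeds the other's input.
couple = Lens(fwd=lambda ab: None, bwd=lambda ab, _: (ab[1], ab[0]))

closed = serial(parallel(sys1, sys2), couple)

# The backward map of the composite is the coupled vector field:
#   x1-dot = -x1 + x2,  x2-dot = 2 * x1
vfield = lambda xs: closed.bwd(xs, None)
```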

# Part 3: Symbolic lenses

In the first blog post in this series, we learned how to make a “symbolic category of spaces” via algebraic theories, and the mantra “algebra is dual to geometry”.

We are now going to use this to build symbolic models of dynamical systems. Essentially, this boils down to “do the constructions of the above section starting from a symbolic category of spaces”, but there’s some elaboration that should take place.

Let’s start by taking the simplest algebraic theory: the theory with one type and no operations. The category of finitely presented algebras of this theory is just \mathsf{FinSet}. So then the question becomes: what are lenses in \mathsf{FinSet}^\mathrm{op}?

For the sake of brevity, we will only cover simple lenses. This will suffice to do dynamical systems with basic state spaces, because T \mathbb{R}^n \cong \mathbb{R}^n \times \mathbb{R}^n.

A simple arena is something of the form \begin{pmatrix}X + Y \\ X\end{pmatrix}, with the injection map X \to X+Y. This is because the product and projection in the definition of a simple arena become coproduct and injection when we dualize.

Then, when we dualize our recipe for a simple lens, we get that a simple lens in \mathsf{FinSet}^\mathrm{op} from \begin{pmatrix}X + Y \\ X\end{pmatrix} to \begin{pmatrix}A + B \\ A\end{pmatrix} consists of a function f \colon A \to X along with a function f^\sharp \colon Y \to B + X.

Such a lens is also known as a **wiring diagram with one box**.

In this wiring diagram:

- X is the set of output ports for the inner box
- Y is the set of input ports for the inner box
- A is the set of output ports for the outer box
- B is the set of input ports for the outer box

We can see that each element of A is assigned an element of X, and each element of Y is assigned an element of B + X.
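As a data structure, such a one-box wiring diagram is just a pair of finite maps. Here is a sketch in Python, with a tagged-pair encoding of the coproduct B + X that is my own choice (the ports and wires are a hypothetical example):

```python
# Port sets for a one-box wiring diagram.
X = {"x1", "x2"}   # inner output ports
Y = {"y1", "y2"}   # inner input ports
A = {"a"}          # outer output ports
B = {"b"}          # outer input ports

# f : A -> X, wiring each outer output to an inner output.
f = {"a": "x1"}

# f# : Y -> B + X, wiring each inner input either to an outer input
# or (as a feedback wire) to an inner output.
f_sharp = {
    "y1": ("outer", "b"),
    "y2": ("inner", "x2"),
}

# Sanity checks that the wiring is well-typed.
assert all(x in X for x in f.values())
assert all(tag in ("outer", "inner") for tag, _ in f_sharp.values())
```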

A model of this algebraic theory is just a set. Suppose that Q is such a model, i.e. a set. Then from a lens in \mathsf{FinSet}^{\mathrm{op}} from \begin{pmatrix}X + Y \\ X\end{pmatrix} to \begin{pmatrix}A + B \\ A\end{pmatrix}, with f \colon A \to X and f^\sharp \colon Y \to B + X, we get a lens in \mathsf{Set} from \begin{pmatrix}Q^X \times Q^Y \\ Q^X\end{pmatrix} to \begin{pmatrix}Q^A \times Q^B \\ Q^A\end{pmatrix}, where Q^f \colon Q^X \to Q^A and Q^{f^\sharp} \colon Q^X \times Q^B \to Q^Y are given by precomposition.

Now suppose that we work in a richer algebraic theory. For example, suppose that we work with the theory of \mathbb{R}-algebras. For simplicity, let’s just consider free \mathbb{R}-algebras. The category of finitely generated free \mathbb{R}-algebras has as objects finite sets, and a morphism f from X to Y consists of a polynomial f_x \in \mathbb{R}[Y] for each x \in X.

A lens (f,f^\sharp) in the dual category to this from \begin{pmatrix}X+Y \\ X\end{pmatrix} to \begin{pmatrix}A+B \\ A\end{pmatrix} consists of a polynomial f_a \in \mathbb{R}[X] for every a \in A, and a polynomial f_y^\sharp \in \mathbb{R}[X+B] for every y \in Y. You can think of f_a as a description of “how to compute” the variable a, given values for all the variables in X, i.e. it’s a dual map X \to A. Likewise f_y^\sharp is a description of “how to compute” the variable y, given values for the variables in X and the variables in B, i.e. it’s a dual map X \times B \to Y.

**Example 9.** The lens for the mass-action semantics of a Petri net can be written in such a form, because the ODEs corresponding to mass-action semantics have polynomial right-hand sides.

Just like with wiring diagrams, if we take any model of the theory of \mathbb{R}-algebras, for instance \mathbb{R} itself, then we can turn any lens in the opposite category of free \mathbb{R}-algebras into an actual geometric lens, by interpreting a map \mathbb{R}[x_1,\ldots,x_n] \to \mathbb{R}[y_1,\ldots,y_m] of algebras as a map \mathbb{R}^m \to \mathbb{R}^n of spaces.
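A sketch of this interpretation step in Python, with a home-made polynomial encoding (a dict from monomials, i.e. tuples of variable names, to coefficients; nothing here is GATlab's actual representation):

```python
# The forward part of a symbolic lens: the output variable "a" is computed
# from state variables x1, x2 by the polynomial a = 3*x1*x2 + x2^2.
f_a = {("x1", "x2"): 3.0, ("x2", "x2"): 1.0}

def interpret(poly, env):
    """Evaluate a polynomial at a point of R^n, given as a variable environment."""
    total = 0.0
    for monomial, coeff in poly.items():
        term = coeff
        for var in monomial:
            term *= env[var]
        total += term
    return total

# The induced geometric map R^2 -> R, evaluated at (x1, x2) = (2, 1).
y = interpret(f_a, {"x1": 2.0, "x2": 1.0})
```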

It is no great stretch from here to consider similar constructions where we allow more operations than just multiplication, addition, and scalar multiplication. For instance, we could also allow \sin, \cos, \exp. The most extreme extension of this is to allow any smooth function \mathbb{R}^n \to \mathbb{R} as an operation; models of such a theory are known as C^\infty-rings, and are very nice from a mathematical standpoint (though completely impractical to represent on a computer).

The complications start to come when we consider

- Non-free finitely presented algebras, such as \mathbb{R}[x,y,z]/(x^2+y^2+z^2-1), which allow us to look at spaces that are not just Euclidean space
- Dependent lenses, where the map in the arena is not just a projection. This allows us to look at non-trivial vector bundles.

This delves further into algebraic geometry than we wish to go in this blog post, but the interested reader is encouraged to look into the subject for themselves.

# Take-aways

One thing I want to emphasize in this post is that there were very few “choices” to make. It was some work to make this all fit together and work out the details, but starting from the premise of “I want to do symbolic dynamical systems” and knowing the two slogans

- algebra is dual to geometry
- open dynamical systems are lenses

was sufficient to come up with this. This is one of the things I like about category theory; sometimes it really can feel like discovery not invention, because once you know what you are looking for, there’s often a canonical way of doing things.

I’m excited to be working on implementing this, and I hope to talk about that in a future post where I will talk about Gatlab, a rewrite of the core of Catlab to be more “morphism-first” with respect to computer algebra.

## References

Myers, David Jaz (2022). *Categorical Systems Theory*. https://github.com/DavidJaz/DynamicalSystemsBook.

## Footnotes

1. There is also an additional technical requirement on arenas, which is that the bundle is **locally trivial**. This requirement will not be relevant at the level of detail we work in here, so we will not go into precisely what it means.↩︎