# Mapper

We will demonstrate a novel application of Mapper to the domain of explainable machine learning, showing how Mapper can be used to understand not just the shape of data, but also the behavior of a machine learning model.

In unsupervised learning, one of the main questions we want to answer is *what type of structure does this set or shape have?* The field of algebraic topology supplies a few tools to help answer this question. We can ask topological questions about the data, such as *how many clusters or connected components are there?* or *are there gaps, holes, or voids in the data?*. These questions can be easy to answer if the shape is well defined, but in practice we instead have data sampled from that shape. If…
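To make the first of these questions concrete, here is a minimal sketch (not from the post, and using hypothetical toy data) of estimating the number of connected components of a sampled shape: link any two sample points within a chosen distance `epsilon` and count the resulting groups with union-find.

```python
import math

def connected_components(points, epsilon):
    """Count components of the epsilon-neighborhood graph via union-find."""
    parent = list(range(len(points)))

    def find(i):
        # Follow parent pointers to the root, with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union every pair of points that lie within epsilon of each other.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= epsilon:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(len(points))})

# Two well-separated blobs sampled from some underlying shape:
sample = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 4.9)]
print(connected_components(sample, epsilon=0.5))  # -> 2
```

The answer depends on the scale `epsilon`, which is exactly the difficulty the post alludes to: the sampled data only reflects the shape's topology at appropriate scales.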

The **nerve** is a simplicial complex built from a cover. It is a discrete summarization of the cover that captures the interesting topological features. Additionally, if the cover is sufficiently refined, then the nerve is guaranteed to preserve the topological features. The nerve of a cover is constructed in a very straightforward way: an (n-1)-dimensional simplex is added for each nonempty n-way intersection of elements of the cover.
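The construction above can be sketched directly in code. This is a toy illustration, assuming the cover is given as a list of point sets; each simplex is recorded as a frozenset of cover indices, and an (n-1)-simplex is added whenever n cover elements have a nonempty common intersection.

```python
from itertools import combinations

def nerve(cover, max_dim=2):
    """Return simplices of the nerve of a cover, up to dimension max_dim."""
    simplices = []
    # n-way intersections give (n-1)-simplices, so n runs from 1 upward.
    for n in range(1, max_dim + 2):
        for idxs in combinations(range(len(cover)), n):
            common = set.intersection(*(set(cover[i]) for i in idxs))
            if common:  # nonempty n-way intersection -> (n-1)-simplex
                simplices.append(frozenset(idxs))
    return simplices

# Three overlapping cover elements over seven points on a line:
cover = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}]
print(nerve(cover))
# Three vertices plus edges {0,1} and {1,2}: a path, and no 2-simplex,
# since no point lies in all three cover elements.
```

Note how the nerve forgets the points themselves and keeps only the intersection pattern of the cover, which is exactly the discrete summarization described above.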

Once we understand the **tower**, we will show that it interacts nicely with all of the other definitions we have constructed so far, and that it allows us to naturally define a multiscale Mapper.

As one of the main tools from the field of Topological Data Analysis, **Mapper** has been shown to be particularly useful for exploring high-dimensional point cloud data. This post will walk through the Mapper construction from an intuitive perspective and demonstrate how it can be carried out on a toy example.
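As a preview, the whole pipeline can be sketched end to end. This is a hedged toy implementation, not a library-grade one: the filter function, the interval cover, the `epsilon` clustering scale, and the sample data below are all illustrative choices. Points are filtered, the filter's range is covered by overlapping intervals, each preimage is clustered, and clusters that share points are connected, which is exactly the nerve of the resulting cover of the data.

```python
import math
from itertools import combinations

def epsilon_clusters(points, epsilon):
    """Greedy single-linkage clustering: merge chains of points within epsilon."""
    clusters = []
    for p in points:
        hits = [c for c in clusters if any(math.dist(p, q) <= epsilon for q in c)]
        merged = [p] + [q for c in hits for q in c]
        clusters = [c for c in clusters if c not in hits] + [merged]
    return clusters

def mapper(points, f, intervals, epsilon):
    """Toy Mapper: cluster each preimage, connect clusters sharing points."""
    nodes = []
    for lo, hi in intervals:
        preimage = [p for p in points if lo <= f(p) <= hi]
        nodes += [frozenset(c) for c in epsilon_clusters(preimage, epsilon)]
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]  # shared points -> edge in the nerve
    return nodes, edges

# A segment of 11 points, filtered by x-coordinate, with two
# overlapping intervals covering the filter range:
segment = [(x / 10, 0.0) for x in range(11)]
nodes, edges = mapper(segment, f=lambda p: p[0],
                      intervals=[(0.0, 0.6), (0.4, 1.0)], epsilon=0.15)
print(len(nodes), edges)  # -> 2 [(0, 1)]
```

Each interval's preimage forms a single cluster, and the overlap region makes the two clusters share points, so the output graph is a single edge, mirroring the segment's connectivity.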