
Causing: CAUSal INterpretation using Graphs

Project description

License: MIT · Python 3.7

Causing is a multivariate graphical analysis tool helping you to interpret the causal effects of a given equation system.

Get a nice colored graph and immediately understand the causal effects between the variables.

Input: You simply put in a dataset and provide an equation system in the form of a Python function. The endogenous variables on the left-hand side are assumed to be caused by the variables on the right-hand side of the equation. Thus, you provide the causal structure in the form of a directed acyclic graph (DAG).

Output: As output you get a colored graph of the quantified effects acting between the model variables. You can immediately interpret mediation chains for every individual observation - even for highly complex nonlinear systems.

Here is a table relating Causing to other approaches:

Causing is Causing is NOT
✅ causal model given ❌ causal search
✅ DAG directed acyclic graph ❌ cyclic, undirected or bidirected graph
✅ latent variables ❌ just observed / manifest variables
✅ individual effects ❌ just average effects
✅ direct, total and mediation effects ❌ just total effects
✅ structural model ❌ reduced model
✅ small and big data ❌ big data requirement
✅ graphical results ❌ just numerical results
✅ XAI explainable AI ❌ black box neural network

The Causing approach is quite flexible. It can be applied to highly latent models with many of the modeled endogenous variables being unobserved. Exogenous variables are assumed to be observed and deterministic. The most severe restriction certainly is that you need to specify the causal model / causal ordering.

Causal Effects

Causing combines total effects and mediation effects in one single graph that is easy to explain.

The total effects of a variable on the final variable are shown in the corresponding nodes of the graph. The total effects are split up over the variable's outgoing edges, yielding the mediation effects shown on the edges. In the education example, just the education variable has more than one outgoing edge to be interpreted in this way.

The effects differ from individual to individual. To emphasize this, we talk about individual effects. The corresponding graph, combining total and mediation effects, is called the Individual Mediation Effects (IME) graph.

Software

Causing is free software written in Python 3. Graphs are generated using Graphviz; see dependencies in setup.py. Causing is available under the MIT license; see LICENSE.

The software is developed by RealRate, an AI rating agency aiming to re-invent the ratings market by using AI and interpretability while avoiding any conflict of interest. See www.realrate.ai.

After cloning or downloading the Causing repository, run python -m causing.examples example; you will find the results in the output folder, saved as SVG files. The IME files show the individual mediation effects graph for the respective individual.

See causing/examples for the code generating some examples.

Start your own Model

To start your own model, you have to provide the following information, as done in the example code below:

  • Define all your model variables as SymPy symbols.
  • Note that in SymPy some operators are special, e.g. Max() instead of max().
  • Provide the model equations in topological order, that is, in order of computation.
  • Then the model is specified with:
    • xvars: exogenous variables
    • yvars: endogenous variables in topological order
    • equations: previously defined equations
    • final_var: the final variable of interest used for mediation effects
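Put together, the specification can be sketched as below. This is a minimal sketch: the SymPy part is runnable as written, while the final model call is commented out because the exact constructor name and signature of the causing API are assumptions here, not taken from this page.

```python
import sympy

# define all model variables as SymPy symbols
X1, X2, Y1, Y2, Y3 = sympy.symbols("X1 X2 Y1 Y2 Y3")

# model equations in topological order (order of computation);
# note that SymPy operators such as Max() replace Python's max()
equations = (
    X1,              # Y1 = X1
    X2 + 2 * Y1**2,  # Y2 = X2 + 2 * Y1^2
    Y1 + Y2,         # Y3 = Y1 + Y2
)

xvars = [X1, X2]      # exogenous variables
yvars = [Y1, Y2, Y3]  # endogenous variables in topological order
final_var = Y3        # final variable of interest for mediation effects

# hypothetical model construction; the actual causing entry point may differ:
# model = causing.Model(xvars=xvars, yvars=yvars,
#                       equations=equations, final_var=final_var)
```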

1. A Simple Example

Assume a model defined by the equation system:

Y1 = X1

Y2 = X2 + 2 * Y1²

Y3 = Y1 + Y2.

This gives the following graphs. Some notes are in order to understand them:

  • The data used consist of 200 observations. They are available for the x variables X1 and X2 with mean(X1) = 3 and mean(X2) = 2. Variables Y1 and Y2 are assumed to be latent / unobserved. Y3 is assumed to be manifest / observed. Therefore, 200 observations are available for Y3.

  • To allow for benchmark comparisons, each individual effect is measured with respect to the mean of all observations.

  • Nodes and edges are colored, showing positive (green) and negative (red) effects they have on the final variable Y3.

  • Individual effects are based on the given model. For each individual, however, its own exogenous data is put into the model function to yield the corresponding endogenous values. The effects are computed at this individual point. Individual effects are shown below just for individual no. 1 out of the 200 observations.

  • Total effects are shown below in the nodes, and they are split up over the outgoing edges, yielding the mediation effects shown on the edges. Note, however, that just the outgoing edges sum up to the node value; incoming edges do not. All effects are effects on the final variable of interest only, assumed here to be Y3.
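To make the benchmark point concrete, here is a small arithmetic check, assuming the toy equations above with the second equation read as Y2 = X2 + 2 · Y1². Plugging in the sample means mean(X1) = 3 and mean(X2) = 2 gives the benchmark values of the endogenous variables:

```python
# benchmark point at the sample means (assumed equations:
# Y1 = X1, Y2 = X2 + 2 * Y1**2, Y3 = Y1 + Y2)
x1, x2 = 3, 2          # mean(X1) = 3, mean(X2) = 2
y1 = x1                # Y1 = X1
y2 = x2 + 2 * y1**2    # Y2 = X2 + 2 * Y1^2
y3 = y1 + y2           # Y3 = Y1 + Y2
print(y1, y2, y3)      # 3 20 23
```

Each individual's effects are then measured relative to such a mean-based benchmark, which is how a statement like "Y3 is +27.07 above average" is anchored.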

Individual Mediation Effects (IME)

As you can see in the right-most graph for the individual mediation effects (IME), there is one green path starting at X1 passing through Y1, Y2 and finally ending in Y3. This means that X1 is the main cause for Y3 taking on a value above average with its effect on Y3 being +29.81. However, this positive effect is slightly reduced by X2. In total, accounting for all exogenous and endogenous effects, Y3 is +27.07 above average. You can understand at one glance why Y3 is above average for individual no. 1.

You can find the full source code for this example here.

2. Application to Education and Wages

To dig a bit deeper, here is a real-world example from the social sciences. We analyze how the wage earned by young American workers is determined by their educational attainment, family characteristics, and test scores.

This five-minute introductory video gives a short overview of Causing and includes this real-data example: See Causing Introduction Video.

See here for a detailed analysis of the Education and Wages example: An Application of Causing: Education and Wages.

3. Application to Insurance Ratings

The Causing approach and its formulas together with an application are given in:

Bartel, Holger (2020), "Causal Analysis - With an Application to Insurance Ratings" DOI: 10.13140/RG.2.2.31524.83848 https://www.researchgate.net/publication/339091133

Note that in this early paper the mediation effects on the final variable of interest are called final effects. Also, while the current Causing version uses numerically computed effects only, that paper uses closed formulas.

The paper proposes simple linear algebra formulas for the causal analysis of equation systems. The effect of one variable on another is the total derivative. It is extended to endogenous system variables. These total effects are identical to the effects used in graph theory and its do-calculus. Further, mediation effects are defined, decomposing the total effect of one variable on a final variable of interest over all its directly caused variables. This allows for an easy but in-depth causal and mediation analysis.
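For the simple example above, the total-derivative definition of an effect can be reproduced directly with SymPy. This is an illustrative sketch of the definition under the assumed toy system, not a reproduction of the paper's closed formulas:

```python
import sympy

X1, X2 = sympy.symbols("X1 X2")

# substitute the upstream equations so Y3 is a function of the exogenous
# variables only (assumed toy system: Y1 = X1, Y2 = X2 + 2*Y1**2, Y3 = Y1 + Y2)
Y1 = X1
Y2 = X2 + 2 * Y1**2
Y3 = Y1 + Y2

# the total effect of X1 on the final variable Y3 is the total derivative
total_effect = sympy.diff(Y3, X1)
print(total_effect)              # 4*X1 + 1
print(total_effect.subs(X1, 3))  # 13, evaluated at the sample mean X1 = 3

# mediation: the total effect of Y1 on Y3 decomposes over Y1's outgoing
# edges, the direct edge Y1 -> Y3 (effect 1) plus the path
# Y1 -> Y2 -> Y3 (effect 4*Y1)
```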

The equation system provided by the user is represented as a structural neural network (SNN). The network's nodes correspond to the model variables, and its edge weights are given by the effects. Unlike classical deep neural networks, we follow a sparse and 'small data' approach. This new methodology is applied to financial strength ratings of insurance companies.

Keywords: total derivative, graphical effect, graph theory, do-Calculus, structural neural network, linear Simultaneous Equations Model (SEM), Structural Causal Model (SCM), insurance rating

Award

RealRate's AI software Causing is a winner of PyTorch AI Hackathon.

We are excited to be a winner of the PyTorch AI Hackathon 2020 in the Responsible AI category. This is quite an honor, given that more than 2,500 teams submitted their projects.

devpost.com/software/realrate-explainable-ai-for-company-ratings.

Contact

Dr. Holger Bartel
RealRate
Cecilienstr. 14, D-12307 Berlin
holger.bartel@realrate.ai
Phone: +49 160 957 90 844
www.realrate.ai


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

causing-2.1.0.tar.gz (15.6 kB)

Uploaded Source

Built Distribution

causing-2.1.0-py3-none-any.whl (13.5 kB)

Uploaded Python 3

File details

Details for the file causing-2.1.0.tar.gz.

File metadata

  • Download URL: causing-2.1.0.tar.gz
  • Upload date:
  • Size: 15.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.14

File hashes

Hashes for causing-2.1.0.tar.gz

  • SHA256: cf7ac222396e919fd580da376490f6ed40525fee6cd7d626afcccae6a21f4c52
  • MD5: 9d5181eff28cf72f98d67ac61fa50571
  • BLAKE2b-256: 3bb0a1938d408a7c7b329a797a89dd2459909af17b14f166aa0bc1eeeb4c5b04

See more details on using hashes here.

File details

Details for the file causing-2.1.0-py3-none-any.whl.

File metadata

  • Download URL: causing-2.1.0-py3-none-any.whl
  • Upload date:
  • Size: 13.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.14

File hashes

Hashes for causing-2.1.0-py3-none-any.whl

  • SHA256: e28a02c10930315ebc0560c1c7de2a0246a1ca1bc0e32ad69e3fd61b579d1cda
  • MD5: 509ee92a3dcdb9ba56307db810c5aa26
  • BLAKE2b-256: 31e0d25a5ac1ed25ae8fa663b043db30df4be94e38b79451b78a789055c50063

See more details on using hashes here.
