
Deep attractors: Where deep learning meets chaos

[This article was first published on RStudio AI Blog, and kindly contributed to R-bloggers.]

```
# Run the training loop for 200 epochs, logging progress as we go.
for (epoch in 1:200) {
  cat("Epoch: ", epoch, "\n")
  training_loop(ds_train)
}
```

After two hundred epochs, the overall loss is 2.67, with the MSE component at 1.8 and the FNN component at 0.09.

Obtaining the attractor from the test set

We use the test set to inspect the latent code:
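A minimal sketch of how such a latent code could be computed, assuming a trained encoder and a batched test dataset ds_test as set up earlier (object names here are illustrative):

```
library(dplyr)    # provides the pipe
library(tibble)

# Sketch only: `encoder` and `ds_test` are assumed to exist from earlier steps.
test_batch <- reticulate::iter_next(reticulate::as_iterator(ds_test))

predicted <- encoder(test_batch[[1]]) %>%   # latent code for one batch of inputs
  as.array() %>%
  as.data.frame() %>%                       # unnamed columns become V1 ... V10
  as_tibble()

predicted
```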

# A tibble: 6,242 x 10
      V1    V2         V3        V4        V5         V6        V7        V8       V9       V10
   <dbl> <dbl>      <dbl>     <dbl>     <dbl>      <dbl>     <dbl>     <dbl>    <dbl>     <dbl>
 1 0.439 0.401 -0.000614  -0.0258   -0.00176  -0.0000276  0.000276  0.00677  -0.0239   0.00906 
 2 0.415 0.504  0.0000481 -0.0279   -0.00435  -0.0000970  0.000921  0.00509  -0.0214   0.00921 
 3 0.389 0.619  0.000848  -0.0240   -0.00661  -0.000171   0.00106   0.00454  -0.0150   0.00794 
 4 0.363 0.729  0.00137   -0.0143   -0.00652  -0.000244   0.000523  0.00450  -0.00594  0.00476 
 5 0.335 0.809  0.00128   -0.000450 -0.00338  -0.000307  -0.000561  0.00407   0.00394 -0.000127
 6 0.304 0.828  0.000631   0.0126    0.000889 -0.000351  -0.00167   0.00250   0.0115  -0.00487 
 7 0.274 0.769 -0.000202   0.0195    0.00403  -0.000367  -0.00220  -0.000308  0.0145  -0.00726 
 8 0.246 0.657 -0.000865   0.0196    0.00558  -0.000359  -0.00208  -0.00376   0.0134  -0.00709 
 9 0.224 0.535 -0.00121    0.0162    0.00608  -0.000335  -0.00169  -0.00697   0.0106  -0.00576 
10 0.211 0.434 -0.00129    0.0129    0.00606  -0.000306  -0.00134  -0.00927   0.00820 -0.00447 
# … with 6,232 more rows

As a result of the FNN regularizer, the latent code units should be ordered roughly by decreasing variance, with a sharp drop appearing at some point (provided the FNN weight has been chosen adequately).

For an FNN weight of 10, we do see a drop after the first two units:
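These per-unit variances are straightforward to compute from the latent code; a sketch, again assuming the illustrative predicted tibble from above:

```
library(dplyr)

# Variance of each latent unit over the test batch (sketch; `predicted` as above).
predicted %>%
  summarise(across(everything(), var))
```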

# A tibble: 1 x 10
      V1     V2      V3      V4      V5      V6      V7      V8      V9     V10
   <dbl>  <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>   <dbl>
1 0.0739 0.0582 1.12e-6 3.13e-4 1.43e-5 1.52e-8 1.35e-6 1.86e-4 1.67e-4 4.39e-5

So the model indicates that the Lorenz attractor can be represented in two dimensions. If we nonetheless want to plot the complete (reconstructed) state space in three dimensions, we should reorder the remaining variables by magnitude of variance¹. Here, this results in three two-dimensional projections of the space spanned by V1, V2 and V4:

Figure: Attractors as predicted from the latent code (test set). The three highest-variance variables were chosen.
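As an illustration of how one of these projections could be drawn, a sketch (again using the illustrative predicted tibble, and showing only the V1/V2 plane):

```
library(ggplot2)

# Sketch: project the latent code onto the V1/V2 plane and trace the trajectory.
ggplot(predicted, aes(V1, V2)) +
  geom_path() +
  coord_fixed() +
  theme_minimal()
```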

Wrapping up (for this time)

At this point, we’ve seen how to reconstruct the Lorenz attractor from data we did not train on (the test set), using an autoencoder regularized by a custom false nearest neighbors loss. It is important to stress that at no point was the network presented with the expected solution (attractor) – training was purely unsupervised.

This is a fascinating result. Of course, thinking practically, the next step is to obtain predictions on held-out data. Given how long this text has become already, we reserve that for a follow-up post. And of course, we're thinking about other datasets, especially ones where the true state space is not known beforehand. What about measurement noise? What about datasets that are not completely deterministic²? There is a lot to explore; stay tuned – and, as always, thanks for reading!


  1. As per the author's recommendation (personal communication).

  2. See Kantz and Schreiber, Nonlinear Time Series Analysis, for detailed discussions of applying methodology from nonlinear deterministic systems analysis to noisy and/or partly stochastic data.

