Paper Club

The «Paper Club» is a small circle of interested students that meets once a month to read and discuss papers from diverse scientific fields. The following table lists the papers we have read in previous meetings.

Date               | Authors                                                               | Title
September 25, 2017 | Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer | Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
October 27, 2017   | Martin Gander, Gerhard Wanner                                         | From Euler, Ritz, and Galerkin to Modern Computing
December 4, 2017   | Satoshi Nakamoto                                                      | Bitcoin: A Peer-to-Peer Electronic Cash System
January 3, 2018    | Tianqi Chen, Carlos Guestrin                                          | XGBoost: A Scalable Tree Boosting System
January 29, 2018   | Miguel Alcubierre                                                     | The warp drive: hyper-fast travel within general relativity
March 7, 2018      | Maurizio Falcone, Roberto Ferretti                                    | Semi-Lagrangian Approximation Schemes for Linear and Hamilton-Jacobi Equations — Chapter 8: Control and games
April 11, 2018     | Dave Bayer, Persi Diaconis                                            | Trailing the Dovetail Shuffle to its Lair (I)
May 1, 2018        | Dave Bayer, Persi Diaconis                                            | Trailing the Dovetail Shuffle to its Lair (II)

PAPP

Project AFEM Plus Plus (PAPP) aims to implement a fast yet practical FEM library written in C++. It provides

  • fast adaptive n-dimensional refinement (see the sketch after this list)
  • an easy way to extend the library with custom refinement schemes and finite elements
  • an easy-to-use FFI
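To make "adaptive refinement" concrete, the following is a self-contained, hypothetical sketch of the classic estimate-mark-refine loop that adaptive FEM codes build on, reduced here to bisecting 1D intervals wherever a piecewise-linear interpolant of f(x) = sqrt(x) is locally too inaccurate. None of the names or structures below are taken from PAPP's actual API.

    // Hypothetical sketch, NOT PAPP's API: the estimate-mark-refine
    // pattern behind adaptive refinement, shown in 1D for brevity.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    double f(double x) { return std::sqrt(x); }

    // Local error indicator: deviation of f from its linear
    // interpolant at the midpoint of the element [a, b].
    double indicator(double a, double b) {
        double mid = 0.5 * (a + b);
        return std::abs(f(mid) - 0.5 * (f(a) + f(b)));
    }

    int main() {
        std::vector<double> nodes = {0.0, 0.25, 0.5, 0.75, 1.0};
        const double tol = 1e-3;

        for (int iter = 0; iter < 30; ++iter) {
            // Estimate: one indicator per element (interval).
            std::vector<double> eta(nodes.size() - 1);
            for (size_t e = 0; e + 1 < nodes.size(); ++e)
                eta[e] = indicator(nodes[e], nodes[e + 1]);

            double worst = *std::max_element(eta.begin(), eta.end());
            if (worst <= tol) break;  // accurate everywhere, stop

            // Mark & refine: bisect every element whose indicator is
            // large (a simple maximum-strategy marking).
            std::vector<double> refined;
            for (size_t e = 0; e + 1 < nodes.size(); ++e) {
                refined.push_back(nodes[e]);
                if (eta[e] > 0.5 * worst)
                    refined.push_back(0.5 * (nodes[e] + nodes[e + 1]));
            }
            refined.push_back(nodes.back());
            nodes.swap(refined);
        }

        std::printf("final mesh has %zu nodes\n", nodes.size());
        return 0;
    }

Running this grades the mesh strongly toward x = 0, where sqrt(x) has unbounded slope; the same loop generalizes to n dimensions once the element, indicator, and refinement rule are replaced.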

Lead developer
Documentation

Xerus

The xerus library is a general-purpose library for numerical calculations with higher-order tensors, Tensor-Train decompositions, and general tensor networks. The focus of development is simple usability and adaptability to any setting that requires higher-order tensors or decompositions thereof.

  • Modern code and concepts incorporating many features of the C++11 standard.
  • Full Python bindings with very similar syntax for easy transitions from and to C++.
  • Calculation with tensors of arbitrary order using an intuitive Einstein-like notation, e.g. A(i,j) = B(i,k,l) * C(k,j,l); (see the sketch after this list).
  • Full implementation of the Tensor-Train decomposition (MPS) with all necessary capabilities (including algorithms like ALS, ADF, and CG).
  • Lazy evaluation of (multiple) tensor contractions, featuring heuristics to automatically find efficient contraction orders.
  • Direct integration of BLAS and LAPACK as high-performance linear algebra backends.
  • Fast sparse tensor calculation using the SuiteSparse sparse matrix capabilities.
  • Capabilities to handle arbitrary tensor networks.
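As a small illustration of the indexed notation and the Tensor-Train format, here is a minimal sketch. It sticks to constructs shown in the library's documentation (the xerus.h header; the Tensor, TTTensor, and Index classes; Tensor::random, TTTensor::random, and frob_norm); all dimensions and ranks are arbitrary choices made for this example.

    #include <iostream>
    #include <xerus.h>

    int main() {
        using namespace xerus;

        // Indices for the Einstein-like notation; indices appearing
        // twice on the right-hand side (k and l) are summed over.
        Index i, j, k, l;

        // Two random dense tensors of order 3.
        Tensor B = Tensor::random({8, 4, 5});
        Tensor C = Tensor::random({4, 6, 5});

        // Contract over k and l. Evaluation is lazy: the expression is
        // analysed first, so an efficient contraction order can be chosen.
        Tensor A;
        A(i, j) = B(i, k, l) * C(k, j, l);
        std::cout << "frob_norm(A) = " << frob_norm(A) << std::endl;

        // A random order-4 tensor created directly in the Tensor-Train
        // format, with all three TT-ranks set to 3.
        TTTensor T = TTTensor::random({10, 10, 10, 10}, {3, 3, 3});
        std::cout << "frob_norm(T) = " << frob_norm(T) << std::endl;

        return 0;
    }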

Project Home-Page