The Summer School will be offered as a virtual event due to the spread of the coronavirus (SARS-CoV-2). Lectures will be available on demand on YouTube during the week of the Summer School. Each lecture will be accompanied by a dedicated Q&A session on Zoom. The schedule for lectures and Q&A sessions will be announced on this webpage soon. The following lectures will be available:
Opening
Katharina Morik

Artificial Intelligence Unit, TU Dortmund University, Dortmund, Germany


The warm welcome to the summer school comes with an introduction to the collaborative research center SFB 876 and the competence center ML2R, which organise it. What are the hot topics of resource-aware machine learning? Why and how should we save energy and communication when learning or applying the learned models? We conclude with practical hints.
Learning meaningful representations for Natural Language Understanding
Sven Gießelbach

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, St. Augustin, Germany


In recent years, the self-supervised learning paradigm has drastically improved the performance of natural language understanding models. Following the talk on Unsupervised Representation Learning, we will take a deeper look at how feature representations in the text domain have evolved towards embeddings obtained from self-supervised learning. Starting with the traditional bag-of-words representation, we will go on to highlight distributed embeddings such as Word2Vec and FastText and ultimately contextualized embeddings such as BERT. We will demonstrate the natural language understanding capabilities of recent state-of-the-art transformer models and present alternative or modified models that are less resource-hungry.
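As a minimal illustration of this evolution, the sketch below contrasts a bag-of-words vector with contextualized BERT embeddings obtained via the Hugging Face transformers library; the checkpoint name and the toy sentences are our own choices, not material from the lecture.

```python
# Minimal sketch: bag-of-words vs. contextualized embeddings.
# Assumes scikit-learn, PyTorch and Hugging Face transformers are installed;
# "bert-base-uncased" is one common pre-trained checkpoint.
import torch
from sklearn.feature_extraction.text import CountVectorizer
from transformers import AutoModel, AutoTokenizer

texts = ["the bank of the river", "the bank approved the loan"]

# Bag-of-words: one count vector per document; word order and context are
# lost, so "bank" gets the same representation in both sentences.
bow = CountVectorizer().fit_transform(texts)

# Contextualized embeddings: each token vector depends on the whole
# sentence, so the two occurrences of "bank" are embedded differently.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    hidden = model(**batch).last_hidden_state  # shape: (2, num_tokens, 768)
```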
Panel Discussion: Publish or Perish
Katharina Morik

Artificial Intelligence Unit, TU Dortmund University, Dortmund, Germany


Publications play a key role in the process of doing science. What is this process of science? How is it advanced by articles? What makes a good paper? How are papers assessed? Once a paper is finally published, it still needs to be read and cited. Publications are also important for the career of a scientist. Ranking publication venues and citations: is this what we aim at? After an introductory talk we shall exchange our ideas about why we work on machine learning, what we are aiming at, and which role we want publications to play.
High-Dimensional Bayesian Regression Copulas
Nadja Klein

Humboldt University, Berlin, Germany


We propose a new semi-parametric distributional regression model based on a copula decomposition of the joint distribution of the vector of response values. The copula is high-dimensional and defines an implicit copula process on the covariate space. By construction we obtain marginally calibrated predictive densities. The copula can be estimated using exact or approximate Bayesian inference, the latter of which is scalable to highly parameterized models. As an illustration, we apply the regression copulas to deep learning architectures to obtain a regression-based alternative to likelihood-free inference methods such as ABC or synthetic likelihood, which we demonstrate on a predator-prey model from ecology.
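For readers new to copula models, the construction rests on a Sklar-type decomposition; a sketch in generic notation (not necessarily the authors' exact formulation):

```latex
% Sklar's theorem: the joint density of the response vector factorizes into
% its marginals and a copula density c on the unit cube. In the regression
% copula, the covariate dependence enters through c, so the marginal
% distributions F_i remain calibrated by construction.
p(y_1,\dots,y_n \mid x_1,\dots,x_n)
  = c\bigl(F_1(y_1),\dots,F_n(y_n) \mid x_1,\dots,x_n\bigr)
    \prod_{i=1}^{n} f_i(y_i)
```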
The Power Consumption of Machine Learning: From Supercomputers to Ultra-Low-Power Devices
Nico Piatkowski

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, St. Augustin, Germany


With the new advent of Machine Learning, a vast number of impressive results have been achieved in the last decade. Some of them can have a significant impact on various aspects of our future daily life. However, a closer look at these state-of-the-art results reveals that almost all of them are produced on "supercomputers" with hundreds or even thousands of GPU or TPU accelerators. No matter how appealing these results are, consumer electronics, medical devices, and autonomous vehicles will most likely not be equipped with such powerful hardware, mostly due to plain energy constraints. In order to bring artificial intelligence to the masses, Machine Learning must be scaled down to low-end hardware. This lecture consists of three parts: First, we discuss the power consumption and CO2 emissions from large-scale multi-GPU setups down to small-scale microcontroller systems. Second, we investigate what kind of Machine Learning can actually run on ultra-low-power devices and discuss some recent results. Finally, theoretical aspects of resource-constrained Machine Learning are presented. More specifically, we will explore how regularization can be used to encode hard resource constraints into the learning problem.
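As a taste of the third part, here is a minimal sketch (our own illustration, not code from the lecture) of one way to encode a hard resource constraint: after each gradient step, the weight vector is projected onto a fixed memory budget of k non-zero parameters, a scheme known as iterative hard thresholding.

```python
# Minimal sketch: linear regression under a hard sparsity (memory) budget.
import numpy as np

def project_l0(w, k):
    """Keep the k largest-magnitude weights, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out

def train_sparse(X, y, k, lr=0.01, steps=500):
    """Gradient descent on squared error, projecting onto ||w||_0 <= k."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w = project_l0(w - lr * grad, k)   # enforce the resource constraint
    return w
```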
The potential of the plasmon-assisted microscopy of nano-objects (PAMONO) technique for diagnostics and treatment of COVID-19 and other viral pathologies
Victoria Shpacovitch

Leibniz Institut für Analytische Wissenschaften, Dortmund, Germany

Konstantin Wüstefeld

TU Dortmund University, Dortmund, Germany


In this lecture we introduce the PAMONO sensor, an analytical instrument that enables sizing and quantification of individual viruses and virus-like particles (VLPs). The sensor also provides affirmed specificity in the detection of target viruses and VLPs. It is built on the principles of plasmon-assisted microscopy, which makes it relatively versatile and grants the invaluable opportunity to combine different analytical functions in one instrument. Since the sensor produces a stream of images containing intensity changes barely visible to the human eye, it is impracticable to analyse these manually. Instead, a deep-learning-based analysis pipeline uses a combination of spatial and temporal approaches to detect signals of interest. We will give an overview of the challenges that come with the PAMONO sensor and show how they can be tackled.
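The following is a purely hypothetical sketch, not the actual PAMONO analysis pipeline: it only illustrates the general idea of combining a temporal step (frame differencing to expose faint intensity changes) with a spatial step (a small CNN producing a per-pixel detection map).

```python
# Hypothetical spatio-temporal detector for an image stream.
import torch
import torch.nn as nn

class SpatioTemporalDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),  # detection map
        )

    def forward(self, frames):            # frames: (time, height, width)
        diffs = frames[1:] - frames[:-1]  # temporal: amplify faint changes
        return self.cnn(diffs.unsqueeze(1))  # spatial: flag candidate signals

detector = SpatioTemporalDetector()
stream = torch.randn(10, 64, 64)          # stand-in for sensor frames
maps = detector(stream)                   # (9, 1, 64, 64) detection maps
```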
Context-aware Optimization of Vehicular Communications for Mobile Crowdsensing
Benjamin Sliwa

Communication Networks Institute, TU Dortmund University, Dortmund, Germany


Vehicular sensor data has been called the "new oil" of the automotive industry. According to a forecast by Intel, the average car will generate 4000 GB of sensor data per day. While the availability of these huge amounts of data will boost decision-making processes in future intelligent transportation systems, it confronts the next generation of cellular networks with massive resource demands, specifically for machine-type communication. In this talk, we will focus on Machine Learning-enabled opportunistic data transfer methods that use client-based intelligence to relieve the network without extending the actual network infrastructure. Instead, the delay-tolerant nature of vehicular data applications is exploited to schedule data transmissions with respect to the anticipated network quality. For this purpose, we present different probabilistic, predictive, and reinforcement-learning-based transmission approaches. Furthermore, we analyze methods for deploying trained prediction models to highly resource-constrained platforms. Finally, we take a look at a possible future evolution stage of predictive vehicular communications within cooperative 6G networks.
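To give a flavor of the probabilistic variant of such schemes (an assumption-laden sketch, not the exact methods from the talk): buffer the delay-tolerant sensor data and transmit with a probability that grows with the measured or predicted network quality, so that transmissions concentrate on good channel conditions.

```python
# Minimal sketch of opportunistic, channel-aware transmission.
import random

def transmit(samples):
    """Hypothetical uplink call; stands in for the real transmission stack."""
    print(f"sending {len(samples)} buffered samples")

def transmission_probability(quality, q_min=0.0, q_max=1.0, alpha=4.0):
    """Map a network-quality indicator (e.g., a predicted data rate scaled
    to [q_min, q_max]) to a transmission probability; alpha > 1 makes the
    scheme wait for good channel conditions."""
    z = min(max((quality - q_min) / (q_max - q_min), 0.0), 1.0)
    return z ** alpha

buffer = []

def on_measurement(sample, predicted_quality):
    """Buffer each new sensor sample; opportunistically flush the buffer."""
    buffer.append(sample)
    if random.random() < transmission_probability(predicted_quality):
        transmit(buffer)
        buffer.clear()
```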
Unsupervised Representation Learning
Lukas Pfahler

Artificial Intelligence Unit, TU Dortmund University, Dortmund, Germany


We explore different methods for learning from large collections of unlabelled data. We cover approaches like embedding learning and move on to recent ideas in self-supervised learning: How can we construct interesting supervised learning tasks from unlabelled data? And how can we use the learned representations for solving downstream tasks? We discuss a number of published examples and present a detailed study on scientific information retrieval.
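One concrete way to construct a supervised task from unlabelled data is contrastive learning; the sketch below shows an InfoNCE-style loss in which two augmented views of the same example must embed close together (a generic illustration, not necessarily the methods covered in the lecture).

```python
# Minimal sketch of a contrastive (InfoNCE-style) self-supervised loss.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same
    examples. Matching rows are positives, all other rows are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities
    labels = torch.arange(len(z1))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```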
Graph Neural Networks
Matthias Fey

Computer Graphics, TU Dortmund University, Dortmund, Germany


Repository: PyTorch Geometric library [1]

Graph Neural Networks (GNNs) have recently emerged as a powerful approach for representation learning on relational data such as social networks, molecular graphs or geometry. Similar to the concepts of convolutional and pooling layers on regular domains, GNNs are able to (hierarchically) extract localized embeddings by passing, transforming, and aggregating information between nodes. In this lecture, I will provide a broad overview of this highly active research field and cover relevant topics such as scalability and applications based on GNNs. In a hands-on session, you will implement and train GNNs from scratch using the PyTorch Geometric library [1]. You will learn how to apply GNNs to your own problems and how PyTorch Geometric enables high GPU throughput on highly sparse and irregular data of varying size.
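As a preview of the hands-on session, a minimal two-layer graph convolutional network for node classification in PyTorch Geometric might look as follows (feature and class counts are placeholders):

```python
# Minimal sketch of a two-layer GCN in PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        # Each GCNConv aggregates transformed features over graph neighbors.
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)   # per-node class logits
```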
Hands-On Session: Graph Neural Networks
Matthias Fey

Computer Graphics, TU Dortmund University, Dortmund, Germany


Repository: PyTorch Geometric library [1]

More: Session Material

In this hands-on session, you will learn how to implement and train Graph Neural Networks (GNNs) using the PyTorch Geometric library [1], built upon PyTorch. We will first take a closer look at how graphs can be represented in tensor-processing libraries before we dive deeper into how GNNs are implemented internally and how to train them. You will then learn how to apply GNNs to reach state-of-the-art performance on numerous tasks such as node and graph classification as well as link prediction. The hands-on session will be provided via Google Colab. In order to prepare yourself and to participate, visit the link to the session material on the left.
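To give a flavor of the first step, the sketch below shows how PyTorch Geometric represents a graph: a dense node-feature matrix plus the edges in sparse COO format (edge_index). The toy triangle graph is our own example.

```python
# Minimal sketch: representing a graph in PyTorch Geometric.
import torch
from torch_geometric.data import Data

# Undirected triangle 0-1-2: each edge is stored in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 0],
                           [1, 0, 2, 1, 0, 2]], dtype=torch.long)
x = torch.randn(3, 16)                    # 3 nodes, 16 features each
data = Data(x=x, edge_index=edge_index)   # data.num_nodes == 3
```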
Interdisciplinary approaches for dealing with the constraints in cyber-physical system design
Peter Marwedel

Cyber-Physical Systems Unit, TU Dortmund University, Dortmund, Germany


It can be expected that most future applications of information and communication technology (ICT) will involve cyber-physical systems (CPS) and the Internet of Things (IoT). In this talk, we will start with an overview of application areas of CPS and IoT, demonstrating their wide variety. This will be followed by a presentation of challenges, including a necessary link to machine learning and AI. CPS design also requires cooperation between a large number of disciplines. In the final part, we will provide examples which demonstrate that this cooperation is feasible and rewarding. In fact, it leads to results that would not be achievable within the constraints of single disciplines.
Deep generative modelling and representation learning
Ramses Sanchez

Bonn-Aachen International Center for Information Technology, Bonn, Germany


YouTube: Lecture Part 1

YouTube: Lecture Part 2

Generative models aim at simulating how datasets, e.g. word sequences in documents, image collections, or complex audio signals like music or speech, are generated. Such models are attractive because they allow us to incorporate prior knowledge or beliefs about the true data-generating mechanism, or to extract causal relations underlying its variables. They can also help us build useful abstractions (representations) of the true generating process. In this lecture we will discuss two of the most popular recent (deep) generative modelling approaches, namely Generative Adversarial Networks (GANs) and both Variational and Wasserstein Autoencoders (AEs). In a bit more detail, we shall (i) briefly revisit probabilistic models parametrized by neural networks and how to train them, (ii) derive the GAN and AE formalisms, focusing on the optimal transport point of view, and (iii) discuss the intuition behind representation learning. We will also review some state-of-the-art applications of these generative models to different language modelling, time-series analysis and computer vision problems.
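For orientation, the two objectives at the heart of the lecture can be written in their standard forms (generic notation, not necessarily the lecture's):

```latex
% GAN: a minimax game between generator G and discriminator D.
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]

% VAE: maximize the evidence lower bound (ELBO) on the marginal
% log-likelihood, with encoder q_\phi and decoder p_\theta.
\log p_\theta(x) \;\geq\;
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr)
```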
FastInference -- Applying Large Models on Small Devices
Sebastian Buschjäger

Artificial Intelligence Unit, TU Dortmund University, Dortmund, Germany


Machine Learning has become ubiquitous in our everyday life and is often embedded into applications in which users unknowingly interact with ML models. This embedding of ML models into the physical world demands that inference is run locally on small, resource-constrained devices to mitigate communication delays and improve the user experience. Thus, the inference of ML models must be optimized for the device at hand. In this talk I will survey recent approaches to improve model application on small devices from a theoretical as well as a practical point of view. I start by categorizing existing optimization approaches for pre-trained ML models into three categories, each having a different impact on the underlying ML model. In the second part, I discuss the practical implementation of each category and give some practical examples.
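As a simple example of one such optimization (our own sketch; the talk's actual categories may differ), post-training quantization maps float32 weights to int8 with a single affine scale, shrinking the model by roughly a factor of four at the cost of a small reconstruction error.

```python
# Minimal sketch: symmetric, per-tensor post-training quantization to int8.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0            # map [-max, max] -> [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale        # approximate reconstruction

w = np.random.randn(256).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```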

Cancelled lectures

Breaking the Memory Wall in AI
Chia-Lin Yang

National Taiwan University (NTU), Taipei, Taiwan

Deep learning techniques have demonstrated great success in many application domains such as computer vision, speech recognition, and natural language processing. It has been shown that the memory subsystem is the main bottleneck in executing DNN applications. This lecture covers recent research on solutions to tackle the memory wall challenge.