Breaking the Memory Wall in AI
Chia-Lin Yang

Nottingham Trent University. Nottingham, England

Deep learning techniques have demonstrated great success in many application domains such as computer vision, speech recognition, and natural language processing. It has been shown that the memory subsystem is the main bottleneck in executing DNN applications. This lecture covers recent research on solutions to tackle the memory wall challenge.
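For intuition on why the memory subsystem becomes the bottleneck, the following back-of-the-envelope sketch (illustrative only; the layer shape and hardware figures are assumptions, not taken from the lecture) compares the arithmetic intensity of a dense layer with a typical GPU's compute/bandwidth balance:

    # Roofline-style estimate: FLOPs per byte of DRAM traffic for y = x @ W
    def arithmetic_intensity(batch, in_features, out_features, bytes_per_elem=4):
        flops = 2 * batch * in_features * out_features            # multiply-accumulates
        traffic = bytes_per_elem * (batch * in_features           # read activations
                                    + in_features * out_features  # read weights
                                    + batch * out_features)       # write outputs
        return flops / traffic

    # Example: batch 32, 4096 -> 4096 fully connected layer.
    print(f"{arithmetic_intensity(32, 4096, 4096):.1f} FLOPs/byte")  # ~15.7

    # A GPU with roughly 15 TFLOP/s peak and 300 GB/s memory bandwidth needs about
    # 15e12 / 300e9 = 50 FLOPs/byte to stay compute-bound; below that, the layer
    # is limited by the memory subsystem.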
FastInference -- Applying Large Models on Small Devices
Sebastian Buschjäger

TU Dortmund University. Dortmund, Germany

Machine Learning has become ubiquitous in our everyday life and is often embedded into applications in which users unknowingly interact with ML models. This embedding of ML models into the physical world demands that inference is run locally on small, resource-constrained devices to mitigate communication delays and improve the user experience. Thus, the inference of ML models must be optimized for the device at hand. In this talk I will survey recent approaches to improve model application on small devices from a theoretical as well as a practical point of view. I start by categorizing existing optimization approaches for pre-trained ML models into three categories, each having a different impact on the underlying ML model. In the second part, I discuss the practical implementation of each category and give some practical examples.
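One widely used optimization of pre-trained models is post-training weight quantization; the following minimal NumPy sketch (chosen here for illustration, not the talk's own categorization) shows symmetric 8-bit quantization and the resulting memory saving:

    import numpy as np

    def quantize_int8(weights):
        """Symmetric 8-bit post-training quantization of a weight tensor."""
        scale = np.abs(weights).max() / 127.0                      # map [-max, max] onto [-127, 127]
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256)).astype(np.float32)             # stand-in for pre-trained weights
    q, scale = quantize_int8(w)
    print("memory: %.0f KB -> %.0f KB" % (w.nbytes / 1024, q.nbytes / 1024))
    print("max abs error:", np.abs(w - dequantize(q, scale)).max())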
Unsupervised Representation Learning
Lukas Pfahler

TU Dortmund University. Dortmund, Germany

We explore different methods for learning from large collections of unlabelled data. We cover approaches like embedding learning and move on to recent ideas in self-supervised learning: How can we construct interesting supervised learning tasks from unlabelled data? And how can we use the resulting representations to solve downstream tasks? We discuss a number of published examples and present a detailed study on scientific information retrieval.
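As a toy illustration of constructing a supervised task from unlabelled data (the sentences and the masking scheme are assumptions for illustration), one can turn raw text into (input, label) pairs by hiding a word and asking a model to predict it:

    import random

    corpus = [
        "graph neural networks operate on relational data",
        "self supervised learning builds labels from raw data",
        "embeddings map discrete symbols into vector spaces",
    ]

    def make_pretext_examples(sentences, seed=0):
        rng = random.Random(seed)
        examples = []
        for sent in sentences:
            tokens = sent.split()
            i = rng.randrange(len(tokens))                          # pick one position to hide
            masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
            examples.append((" ".join(masked), tokens[i]))          # (input, label) pair
        return examples

    for x, y in make_pretext_examples(corpus):
        print(f"{x!r} -> {y!r}")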
The Power Consumption of Machine Learning: From Supercomputers to Ultra-Low-Power Devices
Nico Piatkowski

Fraunhofer Institute. Dortmund, Germany

With the new advent of Machine Learning, a vast number of impressive results has been achieved in the last decade. Some of them can have a significant impact on various aspects of our future daily life. However, a closer look at these state-of-the-art results reveals that almost all are produced on "supercomputers" with hundreds or even thousands of GPU or TPU accelerators. No matter how appealing these results are, consumer electronics, medical devices, or autonomous vehicles will most likely not be equipped with such powerful hardware, mostly due to plain energy constraints. In order to bring artificial intelligence to the masses, Machine Learning must be scaled down to low-end hardware. This lecture consists of three parts: First, we discuss the power consumption and CO2 emissions of setups ranging from large-scale multi-GPU systems to small-scale micro-controller systems. Second, we investigate what kind of Machine Learning can actually run on ultra-low-power devices and discuss some recent results. Finally, theoretical aspects of resource-constrained Machine Learning are presented. More specifically, we will explore how regularization can be used to encode hard resource constraints into the learning problem.
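As a minimal sketch of encoding a hard resource budget into learning (a simplified stand-in, not the regularizers discussed in the lecture), the following projected gradient descent keeps at most k non-zero weights of a logistic regression model:

    import numpy as np

    def project_topk(w, k):
        """Keep only the k largest-magnitude entries of w (projection onto the L0 ball)."""
        out = np.zeros_like(w)
        idx = np.argsort(np.abs(w))[-k:]
        out[idx] = w[idx]
        return out

    def train_sparse_logreg(X, y, k, lr=0.1, steps=500):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            grad = X.T @ (p - y) / len(y)
            w = project_topk(w - lr * grad, k)                      # gradient step, then enforce the budget
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    true_w = np.zeros(50); true_w[:5] = 1.0
    y = (X @ true_w + 0.1 * rng.normal(size=200) > 0).astype(float)
    print("non-zero weights:", np.count_nonzero(train_sparse_logreg(X, y, k=5)))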
Graph Neural Networks
Matthias Fey

TU Dortmund University. Dortmund, Germany

Graph Neural Networks (GNNs) have recently emerged as a powerful approach for representation learning on relational data such as social networks, molecular graphs, or geometry. Similar to the concepts of convolutional and pooling layers on regular domains, GNNs are able to (hierarchically) extract localized embeddings by passing, transforming, and aggregating information between nodes. In this lecture, I will provide a broad overview of this highly active research field and will cover relevant topics such as scalability and applications based on GNNs. In a hands-on session, you will implement and train GNNs from scratch using the PyTorch Geometric library [1]. You will learn how to apply GNNs to your own problems and how PyTorch Geometric enables high GPU throughput on highly sparse and irregular data of varying size. [1] https://github.com/rusty1s/pytorch_geometric
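In the spirit of the hands-on session, a minimal two-layer GCN on the Cora citation graph might look as follows (a sketch with assumed hyperparameters, not the official exercise material):

    import torch
    import torch.nn.functional as F
    from torch_geometric.datasets import Planetoid
    from torch_geometric.nn import GCNConv

    dataset = Planetoid(root="data/Cora", name="Cora")
    data = dataset[0]

    class GCN(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = GCNConv(dataset.num_features, 16)
            self.conv2 = GCNConv(16, dataset.num_classes)

        def forward(self, x, edge_index):
            x = F.relu(self.conv1(x, edge_index))                   # message passing + non-linearity
            x = F.dropout(x, p=0.5, training=self.training)
            return self.conv2(x, edge_index)

    model = GCN()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

    model.train()
    for epoch in range(200):
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()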
Context-aware Optimization of Vehicular Communications for Mobile Crowdsensing
Benjamin Sliwa

TU Dortmund University. Dortmund, Germany

Vehicular sensor data has been called the "new oil" of the automotive industry. According to a forecast by Intel, the average car will generate 4,000 GB of sensor data per day. While the availability of these huge amounts of data will boost decision-making processes in future intelligent transportation systems, it confronts the next generation of cellular networks with massive resource demands, particularly for machine-type communication. In this talk, we will focus on Machine Learning-enabled opportunistic data transfer methods that use client-based intelligence to relieve the network without extending the actual network infrastructure. Instead, the delay-tolerant nature of vehicular data applications is exploited to schedule data transmissions with respect to the anticipated network quality. For this purpose, we present different probabilistic, predictive, and reinforcement learning-based transmission approaches. Furthermore, we analyze methods for deploying trained prediction models to highly resource-constrained platforms. Finally, we take a look at a possible future evolution stage of predictive vehicular communications within cooperative 6G networks.
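As a toy sketch of delay-tolerant, prediction-based scheduling (the threshold, deadline, predictor output, and trace values are illustrative assumptions, not the methods presented in the talk), a client could buffer sensor data and transmit only when the anticipated data rate is high or the data can no longer wait:

    def should_transmit(predicted_rate_mbps, buffer_age_s,
                        rate_threshold=20.0, max_delay_s=30.0):
        """Send now if the predicted link quality is good, or the buffered data
        has exhausted its delay budget."""
        return predicted_rate_mbps >= rate_threshold or buffer_age_s >= max_delay_s

    # Example trace: (predicted data rate in Mbit/s, seconds since the data was buffered)
    trace = [(5.2, 3), (12.8, 9), (34.1, 14), (8.7, 4), (6.3, 31)]
    for rate, age in trace:
        action = "transmit" if should_transmit(rate, age) else "wait"
        print(f"predicted {rate:5.1f} Mbit/s, buffered {age:2d}s -> {action}")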
High-Dimensional Bayesian Regression Copulas
Nadja Klein

Humboldt University. Berlin, Germany

We propose a new semi-parametric distributional regression model based on a copula decomposition of the joint distribution of the vector of response values. The copula is high-dimensional and defines an implicit copula process on the covariate space. By construction, we obtain marginally calibrated predictive densities. The copula can be estimated using exact or approximate Bayesian inference; the latter is scalable to highly parameterized models. As an illustration, we apply the regression copulas to deep learning architectures in order to construct a regression-based alternative to likelihood-free inference methods such as ABC or synthetic likelihood, which we demonstrate on a predator-prey model from ecology.
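As a sketch of the underlying idea (notation chosen here for illustration and possibly differing from the lecture), the joint density of the response vector factorizes into a copula density on the covariate space and identical, marginally calibrated margins:

    % Copula decomposition of the joint response distribution (illustrative notation)
    p(y_1, \dots, y_n \mid x_1, \dots, x_n)
        = c\bigl(F_Y(y_1), \dots, F_Y(y_n) \mid x_1, \dots, x_n\bigr)
          \prod_{i=1}^{n} f_Y(y_i)
    % F_Y, f_Y: marginal CDF and density of the response (marginal calibration);
    % c: implicit copula density defined through the covariates.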
Deep generative modelling and representation learning
Ramses Sanchez

Bonn-Aachen International Center for Information Technology. Bonn, Germany

Generative models aim to simulate how datasets, e.g. word sequences in documents, image collections, or complex audio signals like music or speech, are generated. Such models are attractive because they allow us to incorporate prior knowledge or beliefs about the true data-generating mechanism, or to extract causal relations between its variables. They can also help us build useful abstractions (representations) of the true generating process. In this lecture we will discuss two of the most popular recent (deep) generative modelling approaches, namely Generative Adversarial Networks (GANs) and both Variational and Wasserstein Autoencoders (AEs). In a bit more detail, we shall (i) briefly revisit probabilistic models parametrized by neural networks and how to train them, (ii) derive the GAN and AE formalisms, focusing on the optimal transport point of view, and (iii) discuss the intuition behind representation learning. We will also review some state-of-the-art applications of these generative models to language modelling, time-series analysis, and computer vision problems.
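To make the AE part concrete, here is a minimal Variational Autoencoder with the reparameterization trick and ELBO loss in PyTorch (layer sizes and the stand-in batch are assumptions for illustration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=20):
            super().__init__()
            self.enc = nn.Linear(x_dim, 400)
            self.mu = nn.Linear(400, z_dim)
            self.logvar = nn.Linear(400, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, 400), nn.ReLU(),
                                     nn.Linear(400, x_dim))

        def forward(self, x):
            h = F.relu(self.enc(x))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
            return self.dec(z), mu, logvar

    def elbo_loss(x, x_hat, mu, logvar):
        recon = F.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
        return recon + kl

    x = torch.rand(8, 784)                                            # stand-in batch of flattened images
    x_hat, mu, logvar = VAE()(x)
    print(elbo_loss(x, x_hat, mu, logvar))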
Learning meaningful representations for Natural Language Understanding
Sven Gießelbach

Fraunhofer Institute

In recent years, the self-supervised learning paradigm has drastically improved the performance of natural language understanding models. Following the talk on Unsupervised Representation Learning, we will take a deeper look at how feature representations in the text domain have evolved towards embeddings obtained from self-supervised learning. Starting with the traditional bag-of-words representation, we will go on to highlight distributed embeddings such as Word2Vec and FastText and, ultimately, contextualized embeddings such as BERT. We will demonstrate the natural language understanding capabilities of recent state-of-the-art transformer models and present alternative or modified models that are less resource-hungry.
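A small sketch contrasting the two ends of that evolution (the model name and sentences are illustrative choices): a bag-of-words vector assigns the same column to "bank" in both sentences, whereas a BERT-style encoder yields context-dependent token vectors:

    import torch
    from sklearn.feature_extraction.text import CountVectorizer
    from transformers import AutoTokenizer, AutoModel

    sentences = ["the bank raised interest rates", "we sat on the river bank"]

    # 1) Traditional bag-of-words: one sparse count vector per sentence.
    print(CountVectorizer().fit_transform(sentences).toarray())

    # 2) Contextualized embeddings: every token vector depends on its sentence.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    with torch.no_grad():
        inputs = tokenizer(sentences, return_tensors="pt", padding=True)
        token_embeddings = model(**inputs).last_hidden_state           # shape: (2, tokens, 768)
    print(token_embeddings.shape)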