
CS5073-KP12 - Network Architectures and Deep Learning (NetzADeepL)


Duration:

1 semester

Frequency of offer:

each winter semester

Credit Points:

12

Course of study, specific field and term:
  • Certificate in Artificial Intelligence (compulsory), Artificial Intelligence, 1st semester
Classes and lectures:


Workload:
  • 220 hours private studies
  • 105 hours in-classroom work
  • 20 hours project work
  • 15 hours exam preparation
Contents of teaching:
  • Probabilistic Differential Programming:
  • The lecture provides the foundations needed to understand probabilistic and differential programming from an AI perspective. After a short recap of the required machine learning (ML) and mathematical terminology, the course surveys the main properties of network structures developed in recent deep learning research and applications. It then turns to further topics that will shape future research and (industrial) development in ML: probabilistic (world) models as well as probabilistic and differential programming. In particular, the course argues that the network architectures introduced at the beginning can be explained by, and formally embedded in, both paradigms. Additionally, the course shows that probabilistic (world) models act as an adequate mediator between the symbolic-qualitative approach to modelling and reasoning and the quantitative approach of classical ML. The contents of the course are the following:
  • Basic terminology of ML
  • Gradients and the differential approach
  • (Deep) network architectures: convolutional networks, residual networks, LSTM and control of context dependency, (spatial) transformer networks, probabilistic networks, generative adversarial networks
  • Probabilistic programming
  • Probabilistic models as mediator between ML and logic
  • Deep network architecture with symbolic reasoning and ontological constraints
  • Probabilistic and logistic circuits
  • Deep Learning Lab:
  • Working with different network models: configuration, differential training, application
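The "differential training" practiced in the lab can be illustrated with a minimal sketch (plain NumPy here as an assumption; the actual lab may well use a framework such as PyTorch or TensorFlow with automatic differentiation): a one-parameter-pair linear model is fitted by computing the gradients of a mean-squared-error loss by hand and descending along them.

```python
import numpy as np

# Toy data: learn y = 2x + 1 from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=(100, 1))

# Parameters of a linear model y_hat = w*x + b.
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for step in range(500):
    err = (w * x + b) - y
    # Gradients of L = mean(err^2), derived by hand:
    # dL/dw = 2*mean(err*x), dL/db = 2*mean(err)
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    # Gradient-descent update: follow the negative gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # close to the true parameters 2.0 and 1.0
```

In a deep learning framework, the hand-derived gradient lines are replaced by automatic differentiation through arbitrary network architectures, which is what makes the differential approach compositional.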
Qualification-goals/Competencies:
  • Students can name and define all techniques taught in the module and explain how they function on the basis of applications.
  • Students are able to identify advantages and disadvantages of planning and acting approaches.
  • Students are able to identify ethical aspects and assess their implications.
Grading through:
  • Oral examination
Responsible for this module:
  • Prof. Dr. rer. nat. habil. Ralf Möller
Teachers:
  • Institute of Information Systems
  • Institute of Medical Informatics
  • Institute for Neuro- and Bioinformatics
  • Institute for Signal Processing
  • Institute of Software Technology and Programming Languages
  • Institute of Computer Engineering
  • Prof. Dr. rer. nat. habil. Ralf Möller
  • Staff members of the institute
Literature:
  • I. Goodfellow, Y. Bengio, and A. Courville: Deep Learning - MIT Press, 2016
  • Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola: Dive into Deep Learning - 2019. Book manuscript made available online
  • Michael Nielsen: Neural Networks and Deep Learning - online book
  • S. J. Russell and P. Norvig: Artificial Intelligence: A Modern Approach - Prentice Hall, 2012
  • K. Murphy: Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning series - MIT Press, 2012
  • Current conference and journal articles on the course topics will be announced at the beginning of the course (for the seminar) or when the respective topic is discussed (for the lecture).
Language:
  • offered only in English

Prerequisites for attending the module:
- None

Prerequisites for the exam:
- None

Last updated:

