links - IAV

everything on Moodle

Permitted aids ImageAnalysis_ComputerVision

  • none

Lecture

#timestamp 2025-11-06
Lecture 8

logistic regression model (slide 45)

$$p(y{=}1) = \frac{e^{\theta x}}{1 + e^{\theta x}} = 0.5 \iff \theta x = 0$$
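A quick numerical check of the boundary condition (a minimal NumPy sketch, not from the lecture; the function name `sigmoid` is mine):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# On the decision boundary (theta x = 0) the model is maximally uncertain:
print(sigmoid(0.0))                  # 0.5
# Away from the boundary the probability saturates towards 0 or 1:
print(sigmoid(4.0), sigmoid(-4.0))   # ~0.982, ~0.018
```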

number of hidden layers + size of each layer is mostly guessing + keeping what works


if you make all activations linear (σ(a) = Va + c) => all layers merge and you get back to a linear model (slide 81)
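A minimal NumPy sketch of why the layers merge (shapes and names are mine): composing two linear layers gives exactly one linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# two "layers" with linear activations sigma(a) = Va + c
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)

# stacked linear layers ...
h = W1 @ x + b1
y = W2 @ h + b2

# ... are exactly one merged linear layer:
W, b = W2 @ W1, W2 @ b1 + b2
assert np.allclose(y, W @ x + b)
```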


softmax function: turns k scores (logits) into positive numbers in [0,1] that sum to 1
-> exponential, because the derivative is nice + it makes everything positive (e.g., squaring would also give positivity)
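A minimal NumPy implementation of the idea (the max-subtraction trick is standard practice, not from the lecture):

```python
import numpy as np

def softmax(z):
    # subtracting the max does not change the result but avoids overflow in exp
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, -1.0]))
print(p)        # positive numbers in (0, 1) ...
print(p.sum())  # ... that sum to 1
```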


normally, you test multiple architectures and select the one that works best on the validation set (see the sketch below)
-> neural networks do not extrapolate well!
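A minimal sketch of validation-based architecture selection; `train_and_eval` is a hypothetical stand-in that would train a model and return its validation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_and_eval(arch):
    """Hypothetical stand-in: train `arch`, return its validation loss."""
    return rng.uniform()   # replace with real training + validation loss

candidates = [
    {"hidden_layers": 1, "width": 64},
    {"hidden_layers": 2, "width": 128},
    {"hidden_layers": 3, "width": 256},
]

# keep the architecture with the lowest validation loss
best = min(candidates, key=train_and_eval)
print(best)
```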


cost functions are important and are difficult to choose

error signal:
derivative of the loss with respect to a layer's (pre-)activations

plays a role analogous to a feature, but for the backward pass


$$\delta_{l,i} = \frac{\partial L}{\partial a_{l,i}} = \frac{\partial L}{\partial f_{l,i}}\,\frac{\partial f_{l,i}}{\partial a_{l,i}} = \Big(\sum_k \underbrace{\frac{\partial L}{\partial a_{l+1,k}}}_{\delta_{l+1,k}} \frac{\partial a_{l+1,k}}{\partial f_{l,i}}\Big)\frac{\partial f_{l,i}}{\partial a_{l,i}}$$

-> error signal propagates backwards
-> that's why we don't have feedback connections: we would get infinite error loops
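A minimal NumPy sketch of the backward propagation of the error signal in a 2-layer MLP (squared-error loss and all names are my choice, not the lecture's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

x, t = rng.normal(size=3), rng.normal(size=2)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

sig = lambda a: 1 / (1 + np.exp(-a))

# forward pass: pre-activations a_l, activations f_l
a1 = W1 @ x + b1
f1 = sig(a1)
a2 = W2 @ f1 + b2                 # linear output layer
loss = 0.5 * np.sum((a2 - t) ** 2)

# backward pass: the error signal delta_l = dL/da_l flows backwards
delta2 = a2 - t                           # output layer
delta1 = (W2.T @ delta2) * f1 * (1 - f1)  # chain rule through sigma

# weight gradients follow directly from the error signals
dW2, dW1 = np.outer(delta2, f1), np.outer(delta1, x)
```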

![[Pasted image 20251106162852.png]]

NOTE: the sigmoid function σ saturates, so its gradient vanishes for large |a|; that is why it is normally not used in deep networks

leaky ReLU (keeps a small, nonzero gradient for negative inputs)
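A small NumPy comparison of the two gradients (function names and the 0.01 slope are my choices):

```python
import numpy as np

def sigmoid_grad(a):
    s = 1.0 / (1.0 + np.exp(-a))
    return s * (1.0 - s)

def leaky_relu_grad(a, slope=0.01):
    return np.where(a > 0, 1.0, slope)

for a in (-10.0, 0.0, 10.0):
    print(a, sigmoid_grad(a), leaky_relu_grad(a))
# sigmoid gradient is ~0 for large |a| (saturation), while leaky ReLU
# keeps a nonzero gradient everywhere
```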

#timestamp 2025-11-13

Problems using MLPs on data with a grid-like structure (e.g., images)

k_l - number of filters at a particular layer
the depth of each filter equals the number of channels in the previous layer's output (see the shape check below)
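PyTorch is not mentioned in the notes, but a quick shape check makes the filter-depth rule concrete (layer sizes are my choice):

```python
import torch

# 3 input channels (e.g., RGB), k_l = 8 filters in this layer
conv = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5)

# each of the 8 filters spans all 3 input channels:
print(conv.weight.shape)   # torch.Size([8, 3, 5, 5])

x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)       # torch.Size([1, 8, 28, 28]) -> 8 output channels
```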

-> there will be a question about convolutional layers (slide 39 / week 9)

#timestamp 2025-12-04

softmax returns probabilities (values in [0,1] that sum to 1)

DINO: better generalizability than supervised models

#timestamp 2025-12-11

![[Lecture13_Generative_Models.pdf]]

#timestamp 2025-12-18

Grad-CAM: only gradients have to be computed -> works for (nearly) all architectures
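A minimal sketch of the Grad-CAM idea on a toy model (PyTorch; the model, names, and sizes are mine, not the lecture's): take the gradient of a class score with respect to a conv feature map, average it spatially to get per-channel weights, and form a ReLU-ed weighted sum of the feature maps.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# toy "network": one conv layer as feature extractor + linear head
features = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
head = torch.nn.Linear(8, 10)

x = torch.randn(1, 3, 16, 16)

fmap = features(x)          # (1, 8, 16, 16) feature maps
fmap.retain_grad()          # keep gradients of this intermediate tensor
pooled = F.relu(fmap).mean(dim=(2, 3))   # global average pooling -> (1, 8)
logits = head(pooled)                    # (1, 10) class scores

cls = logits.argmax(dim=1).item()
logits[0, cls].backward()   # gradient of the chosen class score

# channel weights = spatial average of the gradients,
# heatmap = ReLU of the weighted sum of feature maps
weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # (1, 8, 1, 1)
cam = F.relu((weights * fmap).sum(dim=1))            # (1, 16, 16)
print(cam.shape)
```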

Concept Activation Vectors (sensitivity to concept)

the mock exams do not cover all of today's material!!

mostly contextual rather than mathematical (although this year there might be a math question)