Info

Exercises

#timestamp 2025-09-19

![[NumCSE_week1_practice.cpp]]

#timestamp 2025-10-03

Why squared loss?

(-> but something else would also be possible)


Least-squares solution:

$$(\alpha^*,\beta^*)=\operatorname*{argmin}_{\alpha,\beta}\sum_{i=1}^{m}|y_i-\alpha x_i-\beta|^2=\operatorname*{argmin}_{\alpha,\beta} L(\alpha,\beta)$$

$$\frac{\partial L}{\partial\alpha}=\sum_{i=1}^{m}2(y_i-\alpha x_i-\beta)(-x_i)\overset{!}{=}0\;\Rightarrow\;\Big(\sum_{i=1}^{m}x_i^2\Big)\alpha+\Big(\sum_{i=1}^{m}x_i\Big)\beta=\sum_{i=1}^{m}x_i y_i$$

$$\frac{\partial L}{\partial\beta}=\sum_{i=1}^{m}2(y_i-\alpha x_i-\beta)(-1)\overset{!}{=}0\;\Rightarrow\;\Big(\sum_{i=1}^{m}x_i\Big)\alpha+m\beta=\sum_{i=1}^{m}y_i$$

=>

$$\begin{bmatrix}\sum_{i=1}^{m}x_i^2 & \sum_{i=1}^{m}x_i\\ \sum_{i=1}^{m}x_i & m\end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix}=\begin{bmatrix}\sum_{i=1}^{m}x_i y_i\\ \sum_{i=1}^{m}y_i\end{bmatrix}$$

=>

$$A^\top A\,x = A^\top b$$
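
A minimal Eigen sketch (my own, not from the exercise class; the data points are invented) that builds the design matrix $A=[x\;\;\mathbf{1}]$ and solves the normal equations for the line fit:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Invented sample data, roughly on the line y = 2x + 1
    Eigen::VectorXd x(5), y(5);
    x << 0, 1, 2, 3, 4;
    y << 1.1, 2.9, 5.2, 6.8, 9.1;

    // Design matrix A = [x 1], so A * (alpha, beta)^T ~ y
    Eigen::MatrixXd A(x.size(), 2);
    A.col(0) = x;
    A.col(1) = Eigen::VectorXd::Ones(x.size());

    // Normal equations: (A^T A) z = A^T y; A^T A is symmetric, so LDL^T works
    Eigen::Vector2d z = (A.transpose() * A).ldlt().solve(A.transpose() * y);
    std::cout << "alpha = " << z(0) << ", beta = " << z(1) << "\n";
}
```

In practice `A.householderQr().solve(y)` is preferred over forming $A^\top A$ explicitly, since the normal equations square the condition number.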

If the matrix is sparse, we can introduce the residual $r = b - Ax$:

$$A^\top A x = A^\top b \;\Leftrightarrow\; A^\top(b - Ax) = 0 \;\Leftrightarrow\; A^\top r = 0,\quad r = b - Ax \;\Leftrightarrow\; r + Ax = b$$

=>

$$\begin{bmatrix} I & A\\ A^\top & 0\end{bmatrix}\begin{bmatrix} r\\ x\end{bmatrix}=\begin{bmatrix} b\\ 0\end{bmatrix}$$
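
A sketch of how this extended system could be assembled with Eigen's sparse module (my own code; `extendedLSQ` is a made-up name, and `SparseLU` is just one possible solver for the symmetric indefinite block matrix):

```cpp
#include <Eigen/Sparse>
#include <vector>

// Solve the extended normal equations for sparse A (m x n):
//   [ I    A ] [r]   [b]
//   [ A^T  0 ] [x] = [0]
Eigen::VectorXd extendedLSQ(const Eigen::SparseMatrix<double>& A,
                            const Eigen::VectorXd& b) {
    const int m = A.rows(), n = A.cols();
    std::vector<Eigen::Triplet<double>> trip;
    for (int i = 0; i < m; ++i) trip.emplace_back(i, i, 1.0); // I block
    for (int k = 0; k < A.outerSize(); ++k)
        for (Eigen::SparseMatrix<double>::InnerIterator it(A, k); it; ++it) {
            trip.emplace_back(it.row(), m + it.col(), it.value()); // A block
            trip.emplace_back(m + it.col(), it.row(), it.value()); // A^T block
        }
    Eigen::SparseMatrix<double> M(m + n, m + n);
    M.setFromTriplets(trip.begin(), trip.end());

    Eigen::VectorXd rhs = Eigen::VectorXd::Zero(m + n);
    rhs.head(m) = b;

    Eigen::SparseLU<Eigen::SparseMatrix<double>> solver(M);
    Eigen::VectorXd sol = solver.solve(rhs);
    return sol.tail(n); // drop r, return x
}
```

The payoff is that $M$ inherits the sparsity of $A$, whereas $A^\top A$ can fill in badly.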

#timestamp 2025-10-31

![[Pasted image 20251031084025.png]]

#timestamp 2025-11-29

```cpp
do {
    // Gauss-Newton step: least-squares solve of df(x) * s = f(x)
    Eigen::Vector4d s = df(x).householderQr().solve(f(x));
    x = x - s;
    gn_update.push_back(s.norm());
} while (gn_update.back() > tol); // absolute stopping criterion on the update norm
```

Gauss–Newton method, see exercise 8.11 d)
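
For context, a self-contained variant of that loop (the exponential model, data, initial guess, and tolerance are all invented for illustration; only the loop structure mirrors the solution snippet):

```cpp
#include <Eigen/Dense>
#include <iostream>
#include <vector>

int main() {
    // Invented test problem: fit y ~ a * exp(b * t), parameters x = (a, b)
    Eigen::VectorXd t(5), y(5);
    t << 0.0, 0.5, 1.0, 1.5, 2.0;
    y << 1.0, 1.7, 2.6, 4.3, 7.5;

    // Residual F(x) and its Jacobian DF(x)
    auto f = [&](const Eigen::Vector2d& x) -> Eigen::VectorXd {
        return (x(0) * (x(1) * t.array()).exp()).matrix() - y;
    };
    auto df = [&](const Eigen::Vector2d& x) -> Eigen::MatrixXd {
        Eigen::MatrixXd J(t.size(), 2);
        J.col(0) = (x(1) * t.array()).exp().matrix();                       // dF/da
        J.col(1) = (x(0) * t.array() * (x(1) * t.array()).exp()).matrix();  // dF/db
        return J;
    };

    Eigen::Vector2d x(1.0, 1.0); // initial guess
    std::vector<double> gn_update;
    const double tol = 1e-10;
    do {
        // Gauss-Newton step: least-squares solve of DF(x) * s = F(x)
        Eigen::Vector2d s = df(x).colPivHouseholderQr().solve(f(x));
        x -= s;
        gn_update.push_back(s.lpNorm<Eigen::Infinity>());
    } while (gn_update.back() > tol);
    std::cout << "a = " << x(0) << ", b = " << x(1) << "\n";
}
```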

Hiptmair used

```cpp
s.lpNorm<Eigen::Infinity>();
```

Infinity-norm stopping is better because $\|s\|_\infty \le \mathrm{tol}$ guarantees that every component of the update is below the tolerance, independently of the dimension of $s$ (the 2-norm of an update with fixed componentwise size grows with the number of components).

He also used

```cpp
df(x).colPivHouseholderQr().solve(f(x));
```

The Jacobian in Gauss–Newton can easily become nearly rank-deficient, especially early in the iteration. Without pivoting, `householderQr()` may produce an unstable or unusable step; `colPivHouseholderQr()` safeguards the solve by reordering the columns so that the effective conditioning is improved.
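
A toy illustration of the rank-revealing behavior (matrix constructed by me; here the second column is an exact multiple of the first, so the Jacobian is truly rank-deficient):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Rank-deficient "Jacobian": second column = 2 * first column
    Eigen::MatrixXd J(3, 2);
    J << 1.0, 2.0,
         1.0, 2.0,
         1.0, 2.0;
    Eigen::Vector3d r(1.0, 2.0, 3.0);

    Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(J);
    std::cout << "numerical rank: " << qr.rank() << "\n"; // prints 1
    std::cout << "step:\n" << qr.solve(r) << "\n";        // finite least-squares step
}
```

With plain `householderQr()` the back substitution divides by the zero (or tiny) trailing diagonal entry of $R$, which is how the unusable Inf/NaN or huge steps arise.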