We can correct for this by weighting the male responses by 0.48 / (100 / (100 + 150)) = 1.2 and weighting the female responses by 0.52 / (150 / (100 + 150)) = 0.867.
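Each correction is just the ratio of a group's population share to its sample share; a couple of lines of Python (using the sample sizes and population shares from the example above) confirm the arithmetic:

```python
# Example above: 100 male and 150 female respondents,
# against population shares of 48% male and 52% female.
sample = {"male": 100, "female": 150}
population_share = {"male": 0.48, "female": 0.52}

total = sum(sample.values())  # 250 respondents
weights = {
    group: population_share[group] / (count / total)
    for group, count in sample.items()
}

print(round(weights["male"], 3))    # 1.2
print(round(weights["female"], 3))  # 0.867
```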
Note: This is where we get into the more mathematical part of this blog post 🤓. Don't care about this part? No worries! Simply skip to the next section!
The algorithm was first prototyped in Python, using the `numpy` library:

```python
import numpy as np

def raking_inverse(x):
    return np.exp(x)

def d_raking_inverse(x):
    return np.exp(x)

def graking(X, T, max_steps=500, tolerance=1e-6):
    # Based on the algorithm in (Deville et al., 1992), explained in detail on page 37 of
    # https://orca.cf.ac.uk/109727/1/2018daviesgpphd.pdf

    # Initialize variables - Step 1
    n, m = X.shape
    L = np.zeros(m)  # Lagrange multipliers (lambda)
    w = np.ones(n)   # Our weights (will get progressively updated)
    H = np.eye(n)
    success = False

    for step in range(max_steps):
        L += np.dot(np.linalg.pinv(np.dot(np.dot(X.T, H), X)), (T - np.dot(X.T, w)))  # Step 2.1
        w = raking_inverse(np.dot(X, L))             # Step 2.2
        H = np.diag(d_raking_inverse(np.dot(X, L)))  # Step 2.3

        # Termination condition:
        loss = np.max(np.abs(np.dot(X.T, w) - T) / T)
        if loss < tolerance:
            success = True
            break

    if not success:
        raise Exception("Did not converge")
    return w
```
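As a sanity check, here is a toy run on the gender example from earlier (reproducing the functions above so the snippet is self-contained): a sample that is 40% male / 60% female, raked to 48% / 52% targets, should recover exactly the weights 1.2 and 0.867 computed by hand. The one-hot layout of X is our own choice for illustration:

```python
import numpy as np

def raking_inverse(x):
    return np.exp(x)

def d_raking_inverse(x):
    return np.exp(x)

def graking(X, T, max_steps=500, tolerance=1e-6):
    n, m = X.shape
    L, w, H = np.zeros(m), np.ones(n), np.eye(n)
    success = False
    for step in range(max_steps):
        L += np.dot(np.linalg.pinv(np.dot(np.dot(X.T, H), X)), (T - np.dot(X.T, w)))
        w = raking_inverse(np.dot(X, L))
        H = np.diag(d_raking_inverse(np.dot(X, L)))
        loss = np.max(np.abs(np.dot(X.T, w) - T) / T)
        if loss < tolerance:
            success = True
            break
    if not success:
        raise Exception("Did not converge")
    return w

# Toy sample: 2 male and 3 female respondents (one-hot columns: male, female).
X = np.array([
    [1, 0],
    [1, 0],
    [0, 1],
    [0, 1],
    [0, 1],
], dtype=float)

# Targets: 48% / 52% of the total weight of 5.
T = np.array([0.48 * 5, 0.52 * 5])

w = graking(X, T)
print(np.round(w, 3))  # males get ~1.2, females get ~0.867
```

The raked weights hit the group totals T exactly, which is precisely the termination condition of the loop.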
To bring this to production we needed a Ruby equivalent of numpy, which we found in Numo. Numo is an awesome library for vector and matrix operations, and its linalg sub-library was perfect for us, as we needed to compute a matrix pseudo-inverse. This allowed us to translate the code to Ruby almost line by line:

```ruby
require "numo/narray"
require "numo/linalg"

def raking_inverse(x)
  Numo::NMath.exp(x)
end

def d_raking_inverse(x)
  Numo::NMath.exp(x)
end

def graking(X, T, max_steps: 500, tolerance: 1e-6)
  # Based on the algorithm in (Deville et al., 1992), explained in detail on page 37 of
  # https://orca.cf.ac.uk/109727/1/2018daviesgpphd.pdf

  # Initialize variables - Step 1
  n, m = X.shape
  L = Numo::DFloat.zeros(m)      # Lagrange multipliers (lambda)
  w = Numo::DFloat.ones(n)       # Our weights (will get progressively updated)
  H_diag = Numo::DFloat.ones(n)  # Diagonal of the H matrix
  success = false

  max_steps.times do
    L += Numo::Linalg.pinv((X.transpose * H_diag).dot(X)).dot(T - X.transpose.dot(w))  # Step 2.1
    w = raking_inverse(X.dot(L))         # Step 2.2
    H_diag = d_raking_inverse(X.dot(L))  # Step 2.3

    # Termination condition:
    loss = ((T - X.transpose.dot(w)).abs / T).max
    if loss < tolerance
      success = true
      break
    end
  end

  raise StandardError, "Did not converge" unless success
  w
end
```
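One identity the Ruby loop leans on can be sanity-checked quickly (shown here in numpy for convenience; Numo's implicit broadcasting behaves the same way): multiplying by the diagonal of H stored as a plain vector gives the same result as the full diagonal-matrix product, without ever allocating an n-by-n matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))     # 5 respondents, 2 constraints
h = rng.uniform(1, 2, size=5)   # diagonal of H, kept as a vector

full = X.T @ np.diag(h) @ X     # explicit n-by-n diagonal matrix
fast = (X.T * h) @ X            # vector broadcast over the columns of X.T
print(np.allclose(full, fast))  # True
```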
One notable difference from the Python version: we never materialise the full H matrix, only its diagonal (H_diag), since H only contains values on the diagonal. As a result, the product XᵀH can be rewritten as X.transpose * H_diag, making use of Numo's implicit broadcasting. We also made the implementation more robust by raising an error as soon as the loss becomes NaN, and by accepting an initial value for the lambda vector as input, in case we believe we have a better initialisation value than the default.
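Both tweaks are small. Here is a sketch of what they could look like, in Python terms for brevity; the `initial_lambdas` parameter name and the error messages are our own, not taken from the production code:

```python
import numpy as np

def raking_inverse(x):
    return np.exp(x)

def d_raking_inverse(x):
    return np.exp(x)

def graking(X, T, max_steps=500, tolerance=1e-6, initial_lambdas=None):
    n, m = X.shape
    # Start from a caller-supplied lambda vector when we have a better guess.
    L = np.zeros(m) if initial_lambdas is None else np.asarray(initial_lambdas, dtype=float)
    w = raking_inverse(np.dot(X, L))             # exp(0) = 1, so the default matches before
    H = np.diag(d_raking_inverse(np.dot(X, L)))
    success = False
    for step in range(max_steps):
        L += np.dot(np.linalg.pinv(np.dot(np.dot(X.T, H), X)), (T - np.dot(X.T, w)))
        w = raking_inverse(np.dot(X, L))
        H = np.diag(d_raking_inverse(np.dot(X, L)))
        loss = np.max(np.abs(np.dot(X.T, w) - T) / T)
        if np.isnan(loss):
            # Fail fast instead of silently looping on NaNs until max_steps.
            raise Exception("Loss became NaN, aborting")
        if loss < tolerance:
            success = True
            break
    if not success:
        raise Exception("Did not converge")
    return w

# Warm start from lambdas close to the known solution of the toy example.
X = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.], [0., 1.]])
T = np.array([2.4, 2.6])
w = graking(X, T, initial_lambdas=[0.2, -0.14])
```

A good warm start can shave off iterations when re-raking similar samples, while the NaN guard turns a silent divergence into an immediate, debuggable failure.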