A user implemented linear regression using NumPy and linear algebra and received an error.


import numpy as np

def linear_function(w, x, b):
    return np.dot(w, x) + b

x = np.array([[1, 1, 1], [0, 0, 0]])
y = np.array([0, 1])
w = np.random.uniform(-1, 1, (1, 3))
print(w)

learning_rate = .0001

xT = x.T
yT = y.T

for i in range(30000):
    h_of_x = linear_function(w, xT, 1)
    loss = h_of_x - yT
    if i % 1000 == 0:
        print(loss, w)
    w = w + np.multiply(-learning_rate, loss)

linear_function(w, x, 1)

The following error is raised:

ValueError                                Traceback (most recent call last)
in <module>()
     24     if i % 1000 == 0:
     25         print(loss, w)
---> 26     w = w + np.multiply(-learning_rate, loss)
     27
     28 linear_function(w, x, 1)

ValueError: operands could not be broadcast together with shapes (1,3) (1,2)

How can I fix this?

The array shapes are inconsistent: w has shape (1, 3), one weight per feature, while the residual h_of_x - yT has shape (1, 2), one value per sample, so the two cannot be broadcast together in the weight update. There is also a conceptual problem: the code subtracts the raw residual from the weights instead of the gradient of a loss function. With a squared-error loss L = sum((w·x − y)²), the gradient with respect to w is 2·(w·x − y)·x (the constant factor can be folded into the learning rate), which has the same shape as w, so the update is well defined.
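As a minimal sketch of the shape problem, using the arrays from the question, note that the residual has one entry per sample while the gradient has one entry per feature:

import numpy as np

x = np.array([[1, 1, 1], [0, 0, 0]])   # 2 samples, 3 features
y = np.array([0, 1])
w = np.random.uniform(-1, 1, (1, 3))   # one weight per feature

residual = np.dot(w, x.T) - y          # one value per sample
print(residual.shape)                  # (1, 2)
# w + residual fails: (1, 3) and (1, 2) cannot broadcast.
# The squared-error gradient, residual.dot(x), has shape (1, 3),
# matching w, so w - learning_rate * residual.dot(x) is valid.
print(residual.dot(x).shape)           # (1, 3)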

The following code fixes the problem:

import numpy as np

# input, augmented with a column of ones for the bias term
x = np.array([[1, 1, 1], [0, 0, 0]])
x = np.column_stack((np.ones(len(x)), x))

# targets
y = np.array([[0, 1]])

# weights, augmented with the bias weight
w = np.random.uniform(-1, 1, (1, 4))

learning_rate = .0001
loss_old = np.inf

for i in range(30000):
    h_of_x = w.dot(x.T)                      # predictions, shape (1, 2)
    loss = ((h_of_x - y) ** 2).sum()         # squared-error loss
    if abs(loss_old - loss) < 1e-6:          # stop once the loss converges
        break
    w = w - learning_rate * (h_of_x - y).dot(x)  # gradient step
    loss_old = loss
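A quick check after training (a sketch added here, not part of the original answer; the exact numbers depend on the random initialization): the predictions should approach the targets, and gradient descent should converge toward the closed-form least-squares solution given by np.linalg.lstsq.

print(w.dot(x.T))    # should approach [[0, 1]]
print(loss_old)      # final squared-error loss, near 0

# Closed-form cross-check with ordinary least squares.
w_exact, *_ = np.linalg.lstsq(x, y.ravel(), rcond=None)
print(w_exact)       # gradient descent converges toward this solution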


