Cost function | Coursera Community
Solved

Cost function


Why do we minimize the square of (prediction − actual) when finding the values of theta 0 and theta 1 in the hypothesis (the sigma part of the cost function)?


Best answer by mars 6 January 2019, 15:46

mars wrote:

We generally square each difference before summing to avoid cancellation. If we did not square the individual differences, positive and negative errors could cancel each other out when summed, and we could end up with a zero cost even for a poor fit.


The cost function should be zero only when every predicted value equals its label. Squaring the errors guarantees this, since each squared term is non-negative and the sum can only vanish when every term does.
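To make the cancellation point concrete, here is a minimal sketch (not from the course materials; the function names and sample data are illustrative) comparing a naive "sum of raw errors" cost with the usual squared-error cost for a linear hypothesis h(x) = theta0 + theta1·x:

```python
def cost_sum_of_errors(theta0, theta1, xs, ys):
    # Naive cost: sum of raw differences -- positive and negative
    # errors can cancel, so a bad fit can still score zero.
    return sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys))

def cost_squared_error(theta0, theta1, xs, ys):
    # Squared-error cost (the course's J): every term is non-negative,
    # so the cost is zero only when every prediction matches its label.
    m = len(xs)
    return sum(((theta0 + theta1 * x) - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # exactly fit by theta0 = 0, theta1 = 2

# A poor fit whose raw errors (+1, 0, -1) cancel:
print(cost_sum_of_errors(2.0, 1.0, xs, ys))   # 0.0 despite the poor fit
print(cost_squared_error(2.0, 1.0, xs, ys))   # positive, as it should be
print(cost_squared_error(0.0, 2.0, xs, ys))   # 0.0 only for the exact fit
```

The poor fit (theta0 = 2, theta1 = 1) predicts 3, 4, 5 against labels 2, 4, 6: the raw errors sum to zero, but the squared cost correctly reports a nonzero value.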


