Intro test
Minimum questions
This section presents the questions you need to be able to answer in order to follow the course.
1. Is it possible to multiply a vector by a vector?
Yes, there are several ways to do it: the scalar product and the vector product.
No, you have to multiply a vector by a matrix or a number.
No, it works differently for vectors.
Yes, we multiply each component of one vector by each component of the other. This is called the scalar product.
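Both products mentioned in question 1 are easy to experiment with. Below is a minimal sketch assuming NumPy (an assumption; the course does not prescribe a toolchain):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Scalar (dot) product: returns a single number.
print(np.dot(a, b))    # 32.0

# Vector (cross) product: returns another 3-D vector.
print(np.cross(a, b))  # [-3.  6. -3.]
```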
2. Can the norm of a matrix be zero?
Yes
No
3. What is the derivative of the function $f(x) = x^2$?
$2x$
$2x + \text{const}$
$\frac{x^3}{3}$
$\frac{x^3}{3} + \text{const}$
There is no way to take the derivative.
4. What is the antiderivative (indefinite integral) of the function $f(x) = x^2$?
$2x$
$2x + \text{const}$
$\frac{x^3}{3}$
$\frac{x^3}{3} + \text{const}$
It is not possible to calculate the antiderivative.
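Questions 3 and 4 can be verified symbolically. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2

print(sp.diff(f, x))       # the derivative
print(sp.integrate(f, x))  # an antiderivative (SymPy omits the additive constant)
```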
5. What is the scalar product of vectors $(1,1,1)$ and $(2,3,4)$?
$(1,2,1,3,1,4)$
$(1,2,3,4)$
$9$
$(2,3,4)$
Unable to calculate
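Arithmetic like that of question 5 is a one-liner to check. A minimal NumPy sketch:

```python
import numpy as np

u = np.array([1, 1, 1])
v = np.array([2, 3, 4])

# The scalar product is the sum of component-wise products.
print(np.dot(u, v))
```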
6. How do you calculate the determinant of a diagonal matrix?
Add up all the diagonal elements
Multiply all the diagonal elements
It is zero
The determinant of such a matrix is equal to the matrix itself.
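For question 6, the general determinant routine can be compared against a formula for the diagonal case. A minimal NumPy sketch:

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])

print(np.linalg.det(D))     # general-purpose determinant (up to floating-point error)
print(np.prod(np.diag(D)))  # product of the diagonal elements
```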
Substantive course questions
If you confidently know the answers to most of the questions below, the course will probably be too easy for you.
7. Is the function $f(x) = |x|$ convex?
Yes
No
8. Is the set of symmetric positive definite square matrices convex?
Yes
No
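A worked identity relevant to question 8 (for symmetric positive definite $A$, $B$ and $\theta \in [0, 1]$; this is the standard quadratic-form reasoning, not part of the quiz itself):

$$x^\top\big(\theta A + (1-\theta)B\big)x = \theta\, x^\top A x + (1-\theta)\, x^\top B x$$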
9. What is the subgradient of the function $f(x) = \sin(x-4) + 2|x-4|$ at the point $x = 4$?
The function is not differentiable at this point, so there is no subgradient.
$4$
Any number in the interval $[-2, 2]$
Any number in the interval $[-1, 1]$
$0$
None of the answers are correct.
What is a subgradient? (don't know)
Any number in the interval $[-1, 3]$
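For readers unfamiliar with subgradients, a short worked sketch for question 9: near $x = 4$ the smooth term contributes its ordinary derivative, $\cos(x-4)\big|_{x=4} = 1$, while the subdifferential of $2|x-4|$ at $x = 4$ is the interval $2 \cdot [-1, 1] = [-2, 2]$, and the two contributions add up:

$$\partial f(4) = \{1\} + [-2, 2] = [-1, 3]$$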
10. You are training a neural network to classify images. The training set size is 10000 and the batch size is 100. How many epochs will you complete if you perform 1000 iterations of stochastic gradient descent?
1
10
100
1000
10000
Epoch? (don't know)
There is no correct answer choice
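The epoch arithmetic behind question 10 can be spelled out in a few lines; a minimal sketch in pure Python (no libraries assumed):

```python
dataset_size = 10_000
batch_size = 100
iterations = 1_000

# One epoch is one full pass over the training data.
iterations_per_epoch = dataset_size // batch_size
epochs = iterations // iterations_per_epoch
print(epochs)
```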
11. Logistic regression is a method for solving the problem of
Classification
Regression
Clustering
12. Suppose a solution to the linear programming problem exists. In the worst case, the simplex method:
Will not converge
Converges in a polynomial number of steps
Converges in an exponential number of steps
13. Is the problem of optimizing ResNet neural network weights convex?
Yes
No
There is not enough data in the problem
14. When optimizing with a stochastic gradient method, it would be a good idea to:
Decrease the learning rate over time
Increase the learning rate over time
Do not change the learning rate
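Question 14 is easy to probe empirically. Below is a toy sketch (an assumed setup: minimizing $\frac{w^2}{2}$ from noisy gradients with NumPy) showing SGD with a decaying step size settling near the optimum:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 5.0  # start far from the minimizer of f(w) = w**2 / 2

for t in range(1, 1001):
    grad = w + rng.normal(scale=0.5)  # stochastic gradient: true gradient plus noise
    lr = 1.0 / t                      # a classic decaying schedule
    w -= lr * grad

print(w)  # ends up close to the optimum 0 despite the gradient noise
```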
15. Is the statement "Adding Tikhonov regularization to a convex function makes the function strongly convex" true?
Yes
No
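For reference on question 15: a function $g$ is called $\mu$-strongly convex (with $\mu > 0$) if $g(x) - \frac{\mu}{2}\|x\|^2$ is convex, and Tikhonov regularization means adding a term of the form $\frac{\lambda}{2}\|x\|^2$, $\lambda > 0$, to the objective.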
16. Find the minimal Lipschitz constant of the function $f(x) = Ax - b$, where $x$ is a vector of dimension $n$, $A$ is a real matrix of dimension $m \times n$, and $b$ is a vector of dimension $m$.
The function is not Lipschitz continuous.
$2\|A\|$
$\|A^\top A\|$
$\|A\|$
$e^{\|A\|}$
$\|Ax - b\|$
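A definitional reminder for question 16: $L$ is a Lipschitz constant of $f$ if $\|f(x) - f(y)\| \le L\,\|x - y\|$ for all $x, y$. For an affine map the difference simplifies:

$$\|f(x) - f(y)\| = \|A(x - y)\| \le \|A\|\,\|x - y\|,$$

where $\|A\|$ denotes the operator (spectral) norm, and the bound is tight along the leading singular direction of $A$.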
17. Is it true that Newton's method will converge for a convex function if you run it from any point in space?
Yes
No
18. Let the computation of the loss function value of your neural network (forward pass) take time $t$. Approximately how long does it take to compute the gradients with respect to the weights (backward pass)?
$t$
$2t$
$t^2$
$\frac{t}{2}$
$e^t$
$0$
$-t$
$N_{\text{weights}} \cdot t$
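Question 18 can also be measured directly. A rough timing sketch assuming PyTorch (results vary with hardware, model size, and framework overhead):

```python
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
)
x = torch.randn(512, 1024)

start = time.perf_counter()
loss = model(x).square().mean()  # forward pass
t_forward = time.perf_counter() - start

start = time.perf_counter()
loss.backward()                  # backward pass
t_backward = time.perf_counter() - start

print(t_backward / t_forward)    # a small constant factor in practice
```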
19. Is the following statement true: Nesterov momentum and Polyak momentum accelerate the gradient descent method equally for a convex function with a Lipschitz gradient, in terms of the nature of convergence (up to a constant factor)?
Yes
No