Swift for TensorFlow (4): Learning to Compute Maxima and Minima

Today, starting from a simple equation:

f(x) = 5x³ + 2x² − 3x

we will use Swift for TensorFlow to learn how to find its local maximum and minimum.

First, let's work out the maximum and minimum by hand. Setting the derivative of f(x) to zero:
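$$f'(x) = 15x^2 + 4x - 3 = (5x + 3)(3x - 1) = 0 \quad\Rightarrow\quad x = -\tfrac{3}{5} \ \text{or}\ x = \tfrac{1}{3}$$

The second derivative tells us which is which:

$$f''(x) = 30x + 4, \qquad f''\!\left(-\tfrac{3}{5}\right) = -14 < 0, \qquad f''\!\left(\tfrac{1}{3}\right) = 14 > 0$$

So x = −3/5 is a local maximum with f(−3/5) = 36/25 = 1.44, and x = 1/3 is a local minimum with f(1/3) = −16/27 ≈ −0.593.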

Maximum

In code:

import TensorFlow

var x: Float = 0
let η: Float = 0.01          // learning rate
let maxIterations = 100

@differentiable
func f(_ x: Float) -> Float {
    // f(x) = 5x³ + 2x² − 3x, written as plain multiplications
    // so the whole expression is differentiable out of the box
    return 5 * x * x * x + 2 * x * x - 3 * x
}

print("Before optimization, ", terminator: "")
print("x: \(x) and f(x): \(f(x))")

// Optimization loop
for _ in 1...maxIterations {
    // Derivative of `f` w.r.t. `x`.
    let 𝛁xF = gradient(at: x) { x -> Float in
        return f(x)
    }
    // Optimization step: follow the gradient uphill to maximize `f`
    x += η * 𝛁xF
}

print("After gradient ascent, ", terminator: "")
print("input: \(x) and output: \(f(x))")

Minimum

for _ in 1...maxIterations {
    let 𝛁xF = gradient(at: x) { x -> Float in
        return f(x)
    }
    // Optimization step: move against the gradient to minimize `f`.
    // For a scalar Float this is equivalent to `x -= η * 𝛁xF`.
    x.move(along: 𝛁xF.scaled(by: -η))
}
print("After gradient descent, ", terminator: "")
print("input: \(x) and output: \(f(x))")

Explanation

The principle is easy to understand. Around a local maximum, the derivative is positive to the left of the peak and negative to the right, so a step in the direction of the gradient (x += η * 𝛁xF) pushes x toward the maximum from either side.

Similarly, around a local minimum, the derivative is negative on the left and positive on the right, so a step against the gradient (x -= η * 𝛁xF) pushes x toward the minimum from either side.
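We can check these signs numerically by probing the gradient on either side of the two critical points (a small sketch reusing the f defined above):

// Sign of f'(x) around the critical points x = -3/5 and x = 1/3
for probe: Float in [-0.8, -0.4, 0.2, 0.5] {
    let g = gradient(at: probe) { x in f(x) }
    print("x = \(probe), f'(x) = \(g)")
}
// f'(-0.8) = 3.4 > 0 and f'(-0.4) = -2.2 < 0 bracket the maximum;
// f'(0.2) = -1.6 < 0 and f'(0.5) = 2.75 > 0 bracket the minimum.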

This article mainly relies on the function gradient(at:in:):

@inlinable public func gradient<T, R>(at x: T, in f: @differentiable (T) -> Tensor<R>) -> T.TangentVector where T : Differentiable, R : TensorFlowFloatingPoint

(This is the Tensor-valued overload; the closures in this article return a plain Float, so they actually resolve to the scalar overload, whose constraint is R: FloatingPoint with R.TangentVector == R.)
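A minimal usage example:

// d/dx (x * x) evaluated at x = 3 is 6
let g = gradient(at: 3 as Float) { x in x * x }
print(g) // 6.0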
