Adapted from Lecture 3 of Ye Qiang's *Reinforcement Learning* notes: the grid world, solving for the value of a uniform random policy with dynamic programming.
Dynamic programming requires that the MDP be fully known, a condition that clearly holds in this simple game. The code below performs iterative policy evaluation: each state's value under the random policy is updated sweep by sweep until the values converge, after which the corresponding optimal policy can be generated from the converged values.
Note: dynamic programming and reinforcement learning both work with value functions; the difference is that DP plans against a known model, while RL must learn from sampled experience.
The task is to walk from any grid cell to a terminal state (marked in gray).
Note that the comment in the original Zhihu post is wrong: the update used here is synchronous.
For large-scale problems, there are three tricks that can speed up the computation.
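Concretely, each sweep of the code below applies the Bellman expectation backup for the uniform random policy, with one term per action (blocked moves leave the state unchanged):

```latex
v_{k+1}(s) \;=\; \sum_{a} \pi(a \mid s)\,\bigl(R(s) + \gamma\, v_k(s'_a)\bigr)
         \;=\; \frac{1}{4} \sum_{a \in \{n,e,s,w\}} \bigl(-1 + \gamma\, v_k(s'_a)\bigr),
\qquad s'_a = \mathrm{nextState}(s, a)
```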
# State set: the 16 cells of a 4x4 grid, numbered 0..15
states = [i for i in range(16)]
# Value of each state, initialized to 0
values = [0 for _ in range(16)]
# Action set: north, east, south, west
actions = ["n", "e", "s", "w"]
# Index offset each action applies to the state number
ds_actions = {"n": -4, "e": 1, "s": 4, "w": -1}
# Discount factor
gamma = 1.00
# MDP transition: a move off the edge of the grid leaves the state unchanged
def nextState(s, a):
    next_state = s
    if (s % 4 == 0 and a == "w") or (s < 4 and a == "n") or \
       ((s + 1) % 4 == 0 and a == "e") or (s > 11 and a == "s"):
        pass
    else:
        next_state = s + ds_actions[a]
    return next_state
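A quick check of the boundary behavior (the transition is redefined locally so this snippet runs on its own):

```python
ds_actions = {"n": -4, "e": 1, "s": 4, "w": -1}

def nextState(s, a):
    # Moves that would leave the 4x4 grid keep the agent in place
    if (s % 4 == 0 and a == "w") or (s < 4 and a == "n") or \
       ((s + 1) % 4 == 0 and a == "e") or (s > 11 and a == "s"):
        return s
    return s + ds_actions[a]

print(nextState(1, "n"))   # 1  -> top edge, move is blocked
print(nextState(1, "e"))   # 2  -> normal step east
print(nextState(4, "w"))   # 4  -> left edge, blocked
print(nextState(14, "s"))  # 14 -> bottom edge, blocked
```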
# Reward: -1 per step, 0 in the terminal states
def rewardOf(s):
    return 0 if s in [0, 15] else -1

# Test for a terminal state
def isTerminateState(s):
    return s in [0, 15]
# All successor states of s (terminal states have none).
# Note: a blocked move keeps s as its own successor and must NOT be
# filtered out, or the expectation over four actions would be wrong.
def getSuccessors(s):
    successors = []
    if isTerminateState(s):
        return successors
    for a in actions:
        successors.append(nextState(s, a))
    return successors
# Bellman expectation backup for s under the uniform random policy
def updateValue(s):
    successors = getSuccessors(s)
    newValue = 0
    num = 4  # every non-terminal state has exactly four action outcomes
    reward = rewardOf(s)
    for next_state in successors:
        newValue += 1.00 / num * (reward + gamma * values[next_state])
    return newValue
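To see one backup by hand: after sweep 0 every non-terminal cell holds -1. For state 1, the action "n" is blocked (state 1 is its own successor), so its successors are [1, 2, 5, 0], and the sweep-1 value works out to -1.75, matching the grid printed for Iterate No.1:

```python
# Values after sweep 0: terminal cells 0 and 15 are 0, all others -1
v0 = [0] + [-1] * 14 + [0]
# Successors of state 1 for actions n, e, s, w ("n" is blocked)
succ = [1, 2, 5, 0]
# Synchronous Bellman backup under the uniform random policy, gamma = 1
v1_state1 = sum(0.25 * (-1 + v0[sp]) for sp in succ)
print(v1_state1)  # -1.75
```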
# Print the state values as a 4x4 grid
def printValue(v):
    for i in range(16):
        print('{0:>6.2f}'.format(v[i]), end=" ")
        if (i + 1) % 4 == 0:
            print("")
    print()
# One sweep over all states.
# The update is synchronous, not asynchronous: a fresh newValues array is
# filled from the old values, and the global values are replaced only
# after the full sweep.
def performOneIteration():
    newValues = [0 for _ in range(16)]
    for s in states:
        newValues[s] = updateValue(s)
    global values
    values = newValues
    printValue(values)
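One of the speedup tricks mentioned above is in-place (asynchronous) dynamic programming. A minimal self-contained sketch of that variant for the same grid: states are updated in a single array, so later states in a sweep already see the fresh values of earlier ones. It converges to the same fixed point, typically in fewer sweeps:

```python
def in_place_evaluation(sweeps=160):
    # In-place policy evaluation of the uniform random policy on the
    # 4x4 grid world; terminal states 0 and 15 stay at value 0
    v = [0.0] * 16
    offsets = {"n": -4, "e": 1, "s": 4, "w": -1}

    def step(s, a):
        # Blocked moves leave the state unchanged
        if (s % 4 == 0 and a == "w") or (s < 4 and a == "n") or \
           ((s + 1) % 4 == 0 and a == "e") or (s > 11 and a == "s"):
            return s
        return s + offsets[a]

    for _ in range(sweeps):
        for s in range(16):
            if s in (0, 15):
                continue
            # Backup reads from v directly, so fresh values are reused
            v[s] = sum(0.25 * (-1 + v[step(s, a)]) for a in offsets)
    return v
```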
# Main loop: run sweeps No.0 through No.160 (more than enough for the
# values to converge to two decimal places)
def main():
    max_iterate_times = 160
    cur_iterate_times = 0
    while cur_iterate_times <= max_iterate_times:
        print("Iterate No.{0}".format(cur_iterate_times))
        performOneIteration()
        cur_iterate_times += 1
    printValue(values)

if __name__ == '__main__':
    main()
Iterate No.0
0.00 -1.00 -1.00 -1.00
-1.00 -1.00 -1.00 -1.00
-1.00 -1.00 -1.00 -1.00
-1.00 -1.00 -1.00 0.00
Iterate No.1
0.00 -1.75 -2.00 -2.00
-1.75 -2.00 -2.00 -2.00
-2.00 -2.00 -2.00 -1.75
-2.00 -2.00 -1.75 0.00
.
.
.
Iterate No.158
0.00 -14.00 -20.00 -22.00
-14.00 -18.00 -20.00 -20.00
-20.00 -20.00 -18.00 -14.00
-22.00 -20.00 -14.00 0.00
Iterate No.159
0.00 -14.00 -20.00 -22.00
-14.00 -18.00 -20.00 -20.00
-20.00 -20.00 -18.00 -14.00
-22.00 -20.00 -14.00 0.00
Iterate No.160
0.00 -14.00 -20.00 -22.00
-14.00 -18.00 -20.00 -20.00
-20.00 -20.00 -18.00 -14.00
-22.00 -20.00 -14.00 0.00
0.00 -14.00 -20.00 -22.00
-14.00 -18.00 -20.00 -20.00
-20.00 -20.00 -18.00 -14.00
-22.00 -20.00 -14.00 0.00
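Once the values have converged, the optimal policy can be read off greedily by preferring, in each state, the actions whose successor has the highest value. A sketch with a hypothetical helper `greedy_actions` (not part of the original script), using the converged grid above:

```python
# Converged values copied from the final grid
converged = [0, -14, -20, -22,
             -14, -18, -20, -20,
             -20, -20, -18, -14,
             -22, -20, -14, 0]
offsets = {"n": -4, "e": 1, "s": 4, "w": -1}

def step(s, a):
    # Same transition as in the script: blocked moves stay in place
    if (s % 4 == 0 and a == "w") or (s < 4 and a == "n") or \
       ((s + 1) % 4 == 0 and a == "e") or (s > 11 and a == "s"):
        return s
    return s + offsets[a]

def greedy_actions(s):
    # All actions whose successor value is maximal (ties kept)
    best = max(converged[step(s, a)] for a in offsets)
    return [a for a in offsets if converged[step(s, a)] == best]

print(greedy_actions(1))   # ['w'] -> head straight for terminal state 0
print(greedy_actions(14))  # ['e'] -> head straight for terminal state 15
```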