An Efficient Deep Q-learning Strategy for Sequential Decision-making in Game-playing


Abstract:

This paper presents a deep reinforcement learning model that efficiently learns a sequential decision-making policy to play tic-tac-toe intelligently, directly from high-dimensional video input. A pre-trained convolutional neural network, followed by a fully connected sigmoidal network, produces a stable, sparse neural representation of the states of the tic-tac-toe board. The ensemble behaves as a Q-matrix and produces the final state-decision pairs that control a robotic arm placing physical tokens on the board. The hyperparameters of the whole network are tuned to yield a stable, trainable array of elements. An internal clock composed of internal neurons gives the agent a sense of sequential timing. To evaluate the max(·) operation over the Q-network values, a novel search algorithm is introduced. It uses a dedicated sigmoidal network initialized with random parameters; under backpropagation, this network iteratively converges to a stable plateau that mimics the all-zeros condition of an initial Q-matrix. The agent then applies Bellman's reinforcement-learning principles to learn an optimal policy with a noticeable look-ahead capability. Computer simulations driving a physical robot confirmed the convergence and effectiveness of the proposed methodology, demonstrating a marked ability for sequential decision-making from raw video frames.
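The Bellman-style self-play learning the abstract describes can be illustrated with a minimal sketch. This is not the paper's method: the deep Q-network over video frames is replaced here, purely for illustration, by a tabular lookup over board strings, and the update is a simplified backward sweep of discounted returns with sign flips for the alternating players; all function names and hyperparameter values are assumptions.

```python
import random
from collections import defaultdict

# State = board as a 9-char string of 'X', 'O', ' '; action = cell index 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal(board):
    """Indices of empty cells."""
    return [i for i, c in enumerate(board) if c == ' ']

def q_learn(episodes=20000, alpha=0.3, gamma=0.9, eps=0.2):
    """Epsilon-greedy self-play; Q starts implicitly at all zeros,
    echoing the all-zeros initial Q-matrix condition in the abstract."""
    Q = defaultdict(float)  # (state, action) -> value
    for _ in range(episodes):
        board, player = ' ' * 9, 'X'
        history = []  # (state, action) in move order
        while True:
            acts = legal(board)
            s = board
            if random.random() < eps:
                a = random.choice(acts)  # explore
            else:
                a = max(acts, key=lambda x: Q[(s, x)])  # exploit
            board = board[:a] + player + board[a + 1:]
            history.append((s, a))
            w = winner(board)
            if w is not None or not legal(board):
                # +1 for the mover who just won, 0 for a draw; propagate
                # the discounted return backwards, flipping sign each ply
                # because the opponent made the intervening move.
                r = 1.0 if w else 0.0
                for ps, pa in reversed(history):
                    Q[(ps, pa)] += alpha * (r - Q[(ps, pa)])
                    r = -gamma * r
                break
            player = 'O' if player == 'X' else 'X'
    return Q
```

After training, the greedy policy for the opening move is simply `max(legal(' ' * 9), key=lambda a: Q[(' ' * 9, a)])`; the paper's contribution is, in effect, performing this max and the representation learning inside neural networks rather than over a table.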

Year of publication:

2022

Keywords:

  • Artificial Intelligence
  • game-playing
  • reinforcement learning
  • deep Q-learning
  • sequential decision-making

Source:

Scopus
Google

Document type:

Conference Object

Status:

Restricted access

Knowledge areas:

  • Artificial intelligence
  • Computer science

Subject areas:

  • Computer programming, programs, data, security