Implement Alpha-Beta Pruning of the Minimax Search Algorithm for a Simple Tic-Tac-Toe Game
Improve the decision-making efficiency of the computer player by reducing the number of evaluated nodes in the game tree.
This project is a Tic-Tac-Toe game implemented in Python, incorporating Alpha-Beta pruning with the Minimax algorithm. Using this algorithm, the computer player analyzes the game state, evaluates possible moves, and selects the optimal action based on the anticipated outcomes.
The Minimax algorithm recursively evaluates all possible moves and their potential outcomes, building a game tree.
The Alpha-Beta (α-β) algorithm is an optimized Minimax: it stops evaluating a move as soon as that move is provably worse than one already examined, so such moves need not be evaluated further. Added to a plain Minimax search, it returns the same result but cuts off branches that cannot possibly affect the final decision, dramatically improving performance.
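The effect is easiest to see on a toy game tree before looking at the full game. The sketch below (illustrative only, separate from the game code) runs plain Minimax and Alpha-Beta on the same hand-built two-level tree and counts how many leaves each search actually visits:

```python
import math

def minimax_plain(node, maximizing, counter):
    # Leaves are plain numbers; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        counter[0] += 1
        return node
    values = [minimax_plain(child, not maximizing, counter) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, alpha, beta, maximizing, counter):
    if isinstance(node, (int, float)):
        counter[0] += 1
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if beta <= alpha:      # beta cutoff: MIN would never allow this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, counter))
            beta = min(beta, value)
            if beta <= alpha:      # alpha cutoff: MAX already has a better option
                break
        return value

# MAX root with three MIN children, each holding three leaves.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
n_plain, n_ab = [0], [0]
v_plain = minimax_plain(tree, True, n_plain)
v_ab = alphabeta(tree, -math.inf, math.inf, True, n_ab)
print(v_plain, v_ab, n_plain[0], n_ab[0])  # prints: 3 3 9 7
```

Both searches agree on the value 3, but Alpha-Beta skips two leaves in the second branch: once its first leaf (2) proves that branch can yield at most 2, and MAX already has 3 guaranteed, the rest of the branch is irrelevant. On deeper trees the savings compound at every level.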
```python
import math

X = 'X'
O = 'O'
EMPTY = None

def print_board(board):
    for row in board:
        print(' | '.join(cell if cell is not None else ' ' for cell in row))
        print('---------')

def check_winner(board, player):
    # Rows
    for row in board:
        if all(cell == player for cell in row):
            return True
    # Columns
    for col in range(3):
        if all(board[row][col] == player for row in range(3)):
            return True
    # Diagonals
    if all(board[i][i] == player for i in range(3)) or all(board[i][2 - i] == player for i in range(3)):
        return True
    return False

def check_draw(board):
    return all(cell is not None for row in board for cell in row)

def get_available_moves(board):
    return [(row, col) for row in range(3) for col in range(3) if board[row][col] is None]

def evaluate(board):
    # +1 if X has won, -1 if O has won, 0 for a draw, None if the game continues.
    if check_winner(board, X):
        return 1
    elif check_winner(board, O):
        return -1
    elif check_draw(board):
        return 0
    return None

def minimax(board, depth, alpha, beta, maximizing_player):
    eval_result = evaluate(board)
    if eval_result is not None:        # terminal position
        return eval_result
    if maximizing_player:              # X tries to maximize the score
        max_eval = -math.inf
        for row, col in get_available_moves(board):
            board[row][col] = X
            eval_score = minimax(board, depth + 1, alpha, beta, False)
            board[row][col] = None     # undo the move
            max_eval = max(max_eval, eval_score)
            alpha = max(alpha, eval_score)
            if beta <= alpha:          # beta cutoff: O would never allow this line
                break
        return max_eval
    else:                              # O tries to minimize the score
        min_eval = math.inf
        for row, col in get_available_moves(board):
            board[row][col] = O
            eval_score = minimax(board, depth + 1, alpha, beta, True)
            board[row][col] = None     # undo the move
            min_eval = min(min_eval, eval_score)
            beta = min(beta, eval_score)
            if beta <= alpha:          # alpha cutoff: X already has a better option
                break
        return min_eval

def find_best_move(board):
    best_move = None
    best_eval = -math.inf
    alpha = -math.inf
    beta = math.inf
    for row, col in get_available_moves(board):
        board[row][col] = X
        eval_score = minimax(board, 0, alpha, beta, False)
        board[row][col] = None
        if eval_score > best_eval:
            best_eval = eval_score
            best_move = (row, col)
    return best_move

def play_game():
    board = [[EMPTY, EMPTY, EMPTY],
             [EMPTY, EMPTY, EMPTY],
             [EMPTY, EMPTY, EMPTY]]
    current_player = X
    while True:
        print_board(board)
        if current_player == X:
            row, col = find_best_move(board)
            print("X's move:")
        else:
            try:
                row, col = map(int, input("O's move (row col): ").split())
            except ValueError:
                print("Invalid input! Enter two numbers, e.g. 0 2.")
                continue
            if not (0 <= row <= 2 and 0 <= col <= 2) or board[row][col] is not None:
                print("Invalid move! Try again.")
                continue
        board[row][col] = current_player
        if check_winner(board, current_player):
            print_board(board)
            print(current_player, "wins!")
            break
        elif check_draw(board):
            print_board(board)
            print("Draw!")
            break
        current_player = O if current_player == X else X

if __name__ == "__main__":
    play_game()
```
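To quantify the savings on the actual game, the standalone sketch below (a condensed re-implementation mirroring the functions above, not the exact code) searches the full game tree from the empty board twice, with pruning switched off and on, and counts the positions visited. Exact counts for the pruned run depend on move ordering, so only the relationship between the two is asserted:

```python
import math

def winner(board, p):
    # All eight lines: three rows, three columns, two diagonals.
    lines = ([board[r] for r in range(3)] +
             [[board[r][c] for r in range(3)] for c in range(3)] +
             [[board[i][i] for i in range(3)],
              [board[i][2 - i] for i in range(3)]])
    return any(all(cell == p for cell in line) for line in lines)

def moves(board):
    return [(r, c) for r in range(3) for c in range(3) if board[r][c] is None]

def search(board, maximizing, alpha, beta, prune, counter):
    counter[0] += 1                    # count every position visited
    if winner(board, 'X'):
        return 1
    if winner(board, 'O'):
        return -1
    if not moves(board):
        return 0
    best = -math.inf if maximizing else math.inf
    for r, c in moves(board):
        board[r][c] = 'X' if maximizing else 'O'
        score = search(board, not maximizing, alpha, beta, prune, counter)
        board[r][c] = None
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, score)
        else:
            best = min(best, score)
            beta = min(beta, score)
        if prune and beta <= alpha:    # cutoffs only when pruning is enabled
            break
    return best

empty = [[None] * 3 for _ in range(3)]
plain, pruned = [0], [0]
v_plain = search(empty, True, -math.inf, math.inf, False, plain)
v_pruned = search(empty, True, -math.inf, math.inf, True, pruned)
print("value:", v_plain, v_pruned)
print("positions visited:", plain[0], "vs", pruned[0])
```

Both searches return 0 (perfect play from the empty board is a draw), confirming that pruning never changes the result, while the pruned run visits far fewer positions than the exhaustive one.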
![Screen Shot 1946-01-27 at 20 58 26](https://private-user-images.githubusercontent.com/144928933/323066326-f6b108e7-1c84-4129-be8f-7511d6ab68f9.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjIxNzAxNjcsIm5iZiI6MTcyMjE2OTg2NywicGF0aCI6Ii8xNDQ5Mjg5MzMvMzIzMDY2MzI2LWY2YjEwOGU3LTFjODQtNDEyOS1iZThmLTc1MTFkNmFiNjhmOS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjQwNzI4JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI0MDcyOFQxMjMxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0wMWRiYzZjNDVkYzRhYzBjZmZlZjlhNjE1MDdjNDRlYmJkZmIyNzJiNWQ0MDZlM2Y0MGYwZTQ3NjlkOTgzYTc0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZhY3Rvcl9pZD0wJmtleV9pZD0wJnJlcG9faWQ9MCJ9.KyWX5YT2aK5ixeB4wu5CHODCWvNgayQjGdlyajpqAlQ)
![Screen Shot 1946-01-27 at 20 58 49](https://private-user-images.githubusercontent.com/144928933/323066435-40985ea2-8ec4-4305-912c-3ba4c8b5ab41.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjIxNzAxNjcsIm5iZiI6MTcyMjE2OTg2NywicGF0aCI6Ii8xNDQ5Mjg5MzMvMzIzMDY2NDM1LTQwOTg1ZWEyLThlYzQtNDMwNS05MTJjLTNiYTRjOGI1YWI0MS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjQwNzI4JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI0MDcyOFQxMjMxMDdaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0zZTc2Njk0Yzk1MWRiMzYyYzg4YzM1NjI5OGQzZjdiOGIxZTI5MTY5ZDIyNDUwNjY1YTQxNDFjYTBkNGMxZmY2JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCZhY3Rvcl9pZD0wJmtleV9pZD0wJnJlcG9faWQ9MCJ9.yujkSwGUijLB84LiqYQEifpl-4Z1Mc0idtdwDIZTTok)
Alpha-Beta pruning of the Minimax search algorithm for a simple Tic-Tac-Toe game has been implemented successfully.