Search

Searching

  • We are often not given an algorithm to solve a problem
    • Only a specification of what counts as a solution
    • We have to search for a solution
  • We can do so by exploring a directed graph that represents the state space of our problem
  • Sometimes the graph is literal
    • Nodes may represent actual locations in space
    • Edges represent the distance between them
  • Sometimes the graph is implicit
    • Nodes represent states
    • Edges represent state transitions

Directed Graphs

A Search Problem

Definition

  • A search problem is defined by:
    • A set of states
    • An initial state
    • Goal states
      • a boolean function which tells whether a given state is a goal state
    • A successor/neighbor function
      • an action which takes us from one state to other states
    • A cost associated with each action

A solution to the problem is a path from the start state to a goal state

  • optionally with the smallest cost
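The definition above can be sketched as a small interface. Names here (`SearchProblem`, `neighbors`, the example graph) are illustrative assumptions, not from the notes:

```python
# A minimal search-problem interface, assuming states are hashable.
class SearchProblem:
    def __init__(self, start, goals, edges):
        self.start = start              # initial state
        self.goals = set(goals)         # goal states
        self.edges = edges              # {state: [(neighbor, cost), ...]}

    def is_goal(self, state):           # boolean goal test
        return state in self.goals

    def neighbors(self, state):         # successor function with costs
        return self.edges.get(state, [])

# Example: a tiny explicit graph.
problem = SearchProblem("A", ["D"], {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 5)],
    "C": [("D", 1)],
})
```

Any search algorithm then needs only the start state, the goal test, and the neighbor function, regardless of whether the graph is literal or implicit.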

Graph Searching

Generic search algorithm:

Graph Search Algorithm

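A sketch of the generic algorithm: keep a frontier of paths, repeatedly select and remove one, stop if it ends at a goal, otherwise add its one-step extensions. The `select` parameter is an assumption of this sketch; choosing which frontier path to expand is exactly what distinguishes the strategies below:

```python
def generic_search(start, is_goal, neighbors, select):
    """Generic graph search: 'select' picks which frontier index to expand,
    which determines the search strategy (DFS, BFS, ...)."""
    frontier = [[start]]                       # frontier is a set of paths
    while frontier:
        path = frontier.pop(select(frontier))  # select & remove one path
        node = path[-1]
        if is_goal(node):
            return path
        for nxt in neighbors(node):            # extend the path to neighbors
            frontier.append(path + [nxt])
    return None                                # frontier empty: no solution

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# pop(-1) = most recently added path first => depth-first behavior
path = generic_search("A", lambda n: n == "D", graph.get, lambda f: -1)
```

Selecting index 0 instead (oldest path first) gives breadth-first behavior.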

Cycle Checking

Implementation of DFS
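One possible implementation, treating the frontier as a stack of paths and doing the cycle checking mentioned above by skipping neighbors already on the current path (the example graph is illustrative):

```python
def dfs(start, is_goal, neighbors):
    frontier = [[start]]                 # stack of paths (LIFO)
    while frontier:
        path = frontier.pop()            # most recently added path
        node = path[-1]
        if is_goal(node):
            return path
        for nxt in neighbors(node):
            if nxt not in path:          # cycle check: don't revisit on this path
                frontier.append(path + [nxt])
    return None

# The B -> A edge creates a cycle that the check above prunes.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["D"], "D": []}
```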

Properties of DFS

Useful Quantities
Properties

When should we use DFS?

Useful when:

Multiple-Path Pruning

Prune a path to node n if any previously-found path terminates in n

Implementation
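A minimal sketch of multiple-path pruning, here layered on breadth-first search: an `expanded` set records the nodes some path has already reached, and any later path ending at such a node is discarded:

```python
from collections import deque

def bfs_with_pruning(start, is_goal, neighbors):
    """BFS with multiple-path pruning: once one path reaches a node,
    every subsequently selected path to that node is dropped."""
    frontier = deque([[start]])          # queue of paths (FIFO)
    expanded = set()                     # nodes already reached
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node in expanded:             # prune: node was reached before
            continue
        expanded.add(node)
        if is_goal(node):
            return path
        for nxt in neighbors(node):
            frontier.append(path + [nxt])
    return None
```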

Properties of BFS

Properties

When should we use BFS?

Useful when:

Combine the best of BFS and DFS
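This presumably refers to iterative-deepening search (an assumption on my part): run depth-bounded DFS with increasing depth limits, getting DFS's linear space and BFS's shallowest-solution guarantee:

```python
def depth_limited(path, is_goal, neighbors, limit):
    """DFS that explores at most 'limit' arcs beyond the current path."""
    node = path[-1]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for nxt in neighbors(node):
        found = depth_limited(path + [nxt], is_goal, neighbors, limit - 1)
        if found:
            return found
    return None

def iterative_deepening(start, is_goal, neighbors, max_depth=50):
    # Retry with limits 0, 1, 2, ...; the first hit is a shallowest solution.
    for limit in range(max_depth + 1):
        found = depth_limited([start], is_goal, neighbors, limit)
        if found:
            return found
    return None
```

Re-expanding shallow nodes at every limit looks wasteful, but most of a search tree's nodes sit at the deepest level, so the overhead is a constant factor.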

Properties

Properties

Sometimes there are costs associated with arcs
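With arc costs, the frontier can be ordered by path cost so the cheapest path is always expanded first (lowest-cost-first / uniform-cost search). A sketch, with an illustrative graph:

```python
import heapq

def lowest_cost_first(start, is_goal, neighbors):
    """Frontier is a priority queue ordered by cost of the path so far."""
    frontier = [(0, [start])]                 # (cost so far, path)
    while frontier:
        cost, path = heapq.heappop(frontier)  # cheapest path first
        node = path[-1]
        if is_goal(node):
            return cost, path
        for nxt, step in neighbors(node):
            heapq.heappush(frontier, (cost + step, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 5)], "C": [("D", 1)]}
```

Note A-B-D has fewer arcs but costs 6, while A-C-D costs 5; ordering by cost rather than depth finds the cheaper path.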

Properties

Dijkstra’s Algorithm

Variant of LCFS with a kind of multiple-path pruning

Idea: don’t ignore the goal when selecting paths

Example Heuristic Functions

  • If the nodes are points on a Euclidean plane and the cost is the distance, we can use the straight-line distance from n to the closest goal as the value of h(n)
  • If the nodes are locations and cost is time, we can use the distance to a goal divided by the maximum speed
  • If nodes are locations on a grid and cost is distance, we can use the Manhattan distance
    • distance by taking horizontal and vertical moves only
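The three example heuristics can be written directly (coordinate-pair states are an assumption of this sketch):

```python
import math

def straight_line(p, goal):
    """Euclidean straight-line distance (admissible when cost = distance)."""
    return math.hypot(p[0] - goal[0], p[1] - goal[1])

def time_heuristic(p, goal, max_speed):
    """Distance divided by maximum speed (admissible when cost = time)."""
    return straight_line(p, goal) / max_speed

def manhattan(p, goal):
    """Grid distance using horizontal and vertical moves only."""
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
```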

Idea: select the path whose end is closest to a goal according to the heuristic function
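This idea (greedy best-first search) can be sketched by ordering the frontier on h alone; note it ignores path cost, so the path it finds need not be optimal:

```python
import heapq

def greedy_best_first(start, is_goal, neighbors, h):
    """Always expand the frontier path whose end has the lowest h value."""
    frontier = [(h(start), [start])]          # (heuristic at end, path)
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        for nxt in neighbors(node):
            heapq.heappush(frontier, (h(nxt), path + [nxt]))
    return None
```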

Implementation

Properties

Idea: Do a DFS, but add paths to the stack ordered according to h

Implementation

Uses both path cost and heuristic values
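A sketch of A*, ordering the frontier by f(p) = cost(p) + h(end of p); combining actual cost with the heuristic estimate is what distinguishes it from both lowest-cost-first (h only implicitly 0) and greedy best-first (cost ignored):

```python
import heapq

def a_star(start, is_goal, neighbors, h):
    """Expand the frontier path minimizing f = cost so far + heuristic."""
    frontier = [(h(start), 0, [start])]       # (f-value, cost so far, path)
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return cost, path
        for nxt, step in neighbors(node):
            g = cost + step
            heapq.heappush(frontier, (g + h(nxt), g, path + [nxt]))
    return None
```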

Admissibility of A*

If there is a solution, A* always finds an optimal solution - the first path to a goal it selects - provided the heuristic is admissible

Admissible heuristics never overestimate the cost to the goal
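Formally, writing h*(n) for the cost of an optimal path from n to a goal (a symbol introduced here for clarity, not in the notes), h is admissible iff

```latex
0 \le h(n) \le h^{*}(n) \quad \text{for all nodes } n
```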

Proof

Implementation

Properties

Space and Time Complexities

Optimally Efficient

Among all optimal algorithms that start from the same start node and use the same heuristic, A* expands the fewest nodes

Constructing an Admissible Heuristic

  1. Define a relaxed problem by simplifying or removing constraints on the original problem
  2. Solve the relaxed problem without search
  3. The cost of the optimal solution to the relaxed problem is an admissible heuristic for the original problem
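As an example (not from the notes): in the 8-puzzle, relaxing the constraint that a tile may only slide into the blank to "a tile may move to any adjacent cell" gives a relaxed problem solvable without search; its optimal cost is the sum of tile Manhattan distances, which is therefore admissible for the original puzzle:

```python
def manhattan_8puzzle(state, goal):
    """state, goal: dicts mapping tile -> (row, col); tile 0 is the blank.
    Sums each tile's Manhattan distance to its goal cell: the exact optimal
    cost of the relaxed problem, hence admissible for the real one."""
    return sum(abs(r - goal[t][0]) + abs(c - goal[t][1])
               for t, (r, c) in state.items() if t != 0)
```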

Desirable heuristic properties

Dominating Heuristic

Definition

Given heuristics h1(n) and h2(n), h2(n) dominates h1(n) if

  • ∀n (h2(n) ≥ h1(n))
  • ∃n (h2(n) > h1(n))
Theorem

If h2(n) dominates h1(n), A* using h2 will never expand more nodes than A* using h1

Multiple-Path Pruning & Optimal Solutions

What if a subsequent path to n is shorter than the first path to n?

  • Remove all paths from the frontier that use the longer path
  • Change the initial segment of the paths on the frontier to use the shorter path
  • Ensure this doesn’t happen: make sure that the shortest path to a node is found first
    • lowest-cost-first search

Multiple-Path Pruning & A*

Suppose path p to n was selected, but there is a shorter path to n; suppose this shorter path is via path p′ on the frontier, and that p′ ends at node n′. Because p was selected before p′, and the path to n via p′ is shorter:

cost(n′, n) < cost(p) − cost(p′) ≤ h(n′) − h(n)

You can ensure this doesn't occur by requiring h(n′) − h(n) ≤ cost(n′, n)

Monotone Restriction

Heuristic function h satisfies the monotone restriction if h(m) − h(n) ≤ cost(m, n) for every arc (m, n)

Monotonicity and Admissibility

This is a strengthening of the admissibility criterion: if h is monotone and h(g) = 0 for every goal g, then h is admissible (telescope the restriction along any path to a goal)

Summary

(figure: summary table of the search strategies)

Minimax

For competitive, two-person, zero-sum games

Implementation
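A minimal sketch (the callbacks and the tuple-based game tree are illustrative assumptions): the maximizer picks the child of highest value, the minimizer the child of lowest value, recursing to the leaves.

```python
def minimax(state, is_terminal, utility, moves, maximizing):
    """Game value of 'state' assuming both players play optimally."""
    if is_terminal(state):
        return utility(state)
    values = [minimax(nxt, is_terminal, utility, moves, not maximizing)
              for nxt in moves(state)]
    return max(values) if maximizing else min(values)

# Toy game tree: inner nodes are tuples of children, leaves are payoffs.
tree = ((3, 5), (2, 9))
def is_terminal(s): return isinstance(s, int)
def utility(s): return s
def moves(s): return list(s)
```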

Minimax in larger games

Searching all the way to every leaf node is impossibly costly in larger games (e.g. chess)

Higher Level Strategies

The following methods are not full algorithms per se, but can be used to form a strategy for search at a higher level

The definition of searching is symmetric: we can search forward from the start to a goal, or backward from a goal to the start

Sometimes, when the graph is dynamically constructed, you may not be able to construct the backwards graph

You can search backward from the goal and forward from the start simultaneously
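A sketch of bidirectional search as alternating BFS layers from both ends, returning only whether the frontiers meet (path reconstruction omitted; a backward neighbor function is assumed available, which as noted may not always be constructible):

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors, back_neighbors):
    """Grow one BFS layer from each side per iteration; succeed when a
    newly generated node has already been seen by the other side."""
    if start == goal:
        return True
    seen_f, seen_b = {start}, {goal}
    front, back = deque([start]), deque([goal])
    while front and back:
        for frontier, seen, other_seen, step in (
                (front, seen_f, seen_b, neighbors),        # forward layer
                (back, seen_b, seen_f, back_neighbors)):   # backward layer
            for _ in range(len(frontier)):
                node = frontier.popleft()
                for nxt in step(node):
                    if nxt in other_seen:                  # frontiers meet
                        return True
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
    return False
```

Two searches of depth k/2 touch far fewer nodes than one of depth k, which is the point of meeting in the middle.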

Idea: find a set of islands between s and g

s → i1 → i2 → … → i(m−1) → g

There are m smaller problems rather than 1 big problem