
graphs and networks - Efficient solution for a discrete assignment problem with pairwise costs


Let's consider a simple graph with N vertices and a corresponding set of N items. The goal of the problem is to assign every item to a vertex of the graph so that the sum of a per-edge (that is, item-pairwise) cost function over all edges is minimized.
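Stated a touch more formally (this notation is my own paraphrase of the setup above): writing the assignment as a bijection $\sigma$ from vertices to items, the task is to find

$$\operatorname*{arg\,min}_{\sigma}\ \sum_{(u,v)\in E} w\bigl(i_{\sigma(u)},\, i_{\sigma(v)}\bigr),$$

where $E$ is the edge set of the graph and $i_k$ denotes the $k$-th item.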


This problem clearly has a discrete search space of N! candidates, so exhaustive search for the optimal solution becomes practically impossible already at fairly low values of N. I'd expect there to be methods that put more efficient algorithms to work on this problem.


A brute-force toy attempt at the problem is presented below. Here g is the graph, i is the list of item values, and w is an extremely simple pairwise cost function:


With[{
   g = GridGraph[{3, 3}],
   i = {1, 4, 4, 9, 9, 16, 16, 32, 64},
   w = Apply[Abs@*Subtract]},

 First@TakeSmallestBy[
   SetProperty[g,
      VertexLabels -> MapThread[Rule, {Range@VertexCount@g, #}]] & /@
    Permutations@i,
   Function[g,
    Total[w[PropertyValue[{g, #}, VertexLabels] & /@ #] & /@
      EdgeList@g]], 1]]

[image: the optimal assignment, shown as vertex labels on the 3×3 grid graph]


Please note that i and w are just examples; i might consist of, say, images, and w might be an earth mover's distance function, which makes a neat assignment like the one seen above impossible to find by inspection.



My more clever attempts so far have been based on the assumption that this problem could be solved with integer linear programming. Sadly, every attempt I've made to rephrase the problem statement in a form suitable for LinearProgramming has been either incomplete (resulting in inconsistent edges and vertices) or has ended up with the number of constraints growing so large that the complexity is merely moved elsewhere.
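To make that blow-up concrete, here is the kind of 0-1 formulation I have in mind (a generic quadratic-assignment style sketch, not a working LinearProgramming call): with binary variables $x_{v,k}$ meaning "item $k$ sits on vertex $v$", the assignment constraints are

$$\sum_{k} x_{v,k} = 1 \ \text{for every vertex } v, \qquad \sum_{v} x_{v,k} = 1 \ \text{for every item } k,$$

but the objective $\sum_{(u,v)\in E}\sum_{j,k} w(i_j, i_k)\, x_{u,j}\, x_{v,k}$ is quadratic in the variables. Linearizing it in the usual way, with product variables $y_{uv,jk} \ge x_{u,j} + x_{v,k} - 1$, introduces on the order of $|E|\,N^2$ extra variables and constraints, which illustrates the kind of growth mentioned above.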


Just to clarify: I'm not looking for methods to extract the last drop of exhaustive-search performance. Instead, I'm looking for algorithmic improvements in cases where the search space easily consists of 10^50 permutations or more.



Answer



I did some experimentation with the Metropolis-Hastings algorithm for stochastic minimization of the cost function:


ClearAll@mhGraphPairwiseMinimize;

(* minimize the sum of per-edge costs (computed as pairwise vertex item
   distances) by assigning items to vertices in a graph, using a
   Metropolis-Hastings algorithm. higher alpha makes the random walk penalize
   non-improving steps more. *)

mhGraphPairwiseMinimize[g_Graph, items_List, distFunc_Function,
  alpha_, iter_Integer] :=
 Module[{d, vertexToPosition, permutationCost, prependCost,
   symmetricRandomPermute, proposedCandidate, acceptanceProbability,
   newCandidate, initialCandidate, minimizationStep},

  vertexToPosition[v_] := First@FirstPosition[VertexList@g, v];

  (* cost function, constructed from graph, uses distance matrix d *)
  permutationCost[perm_List] :=
   Evaluate[
    Total[Quiet@
        d[[perm[[vertexToPosition@#1]],
          perm[[vertexToPosition@#2]]]] & @@@ EdgeList@g]];

  prependCost[perm_List] := {permutationCost@perm, perm};

  (* just exchange two item-vertex mappings with each other, with flat
     probability. this is both symmetric and transitively ergodic. *)
  symmetricRandomPermute[{_, perm_List}] :=
   Permute[perm, Cycles[{RandomSample[perm, 2]}]];

  proposedCandidate[candidate_List] :=
   prependCost@symmetricRandomPermute[candidate];

  (* this is the acceptance probability for symmetric permutation distributions *)
  acceptanceProbability[candnew_List, candold_List] :=
   Min[1, Exp[-alpha (First@candnew - First@candold)/First@candold]];

  newCandidate[candidate_List] :=
   With[{newcand = proposedCandidate@candidate},
    RandomChoice[{#, 1 - #} &@acceptanceProbability[newcand, candidate] ->
      {newcand, candidate}]];

  (* distance matrix *)
  d = Table[distFunc[a, b], {a, items}, {b, items}];

  (* a random permutation as the starting point *)
  initialCandidate = prependCost@RandomSample@Range@VertexCount@g;

  (* calculate the next (possibly unchanged) random walk value, and update the minimum *)
  minimizationStep[{lastmin_List, candidate_List}] :=
   With[{newcand = newCandidate@candidate},
    {First@TakeSmallestBy[{lastmin, newcand}, First, 1], newcand}];

  (* return the minimum found weight and the corresponding vertex -> item list *)
  {#1, MapThread[Rule, {VertexList@g, items[[#2]]}]} & @@
   First@Nest[minimizationStep, {initialCandidate, initialCandidate}, iter]];
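Reading the acceptance rule off the code above: a proposed swap that would take the cost from $C_\text{old}$ to $C_\text{new}$ is accepted with probability

$$p = \min\!\left(1,\ \exp\!\left(-\alpha\,\frac{C_\text{new} - C_\text{old}}{C_\text{old}}\right)\right),$$

so improving swaps are always accepted, while worsening swaps are accepted with a probability that shrinks as the relative cost increase (and alpha) grows.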

Module[{g, weight, sol},
g = GridGraph[{3, 3}];
{weight, sol} = mhGraphPairwiseMinimize[
g, {1, 4, 4, 9, 9, 16, 16, 32, 64}, Abs[#1 - #2] &, 500, 100];
{weight, Graph[g, VertexLabels -> sol]}]


[image: the weight found and the resulting assignment, shown as vertex labels on the 3×3 grid graph]
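Just to sketch how a more general pairwise cost plugs in (the random feature vectors and EuclideanDistance below are made-up placeholders, not from the question), any symmetric distance wrapped as a Function works the same way:

(* hypothetical items: one 5-component feature vector per vertex *)
Module[{g, items, weight, sol},
 g = GridGraph[{3, 3}];
 items = RandomReal[{0, 1}, {VertexCount@g, 5}];
 {weight, sol} = mhGraphPairwiseMinimize[
   g, items, EuclideanDistance[#1, #2] &, 500, 1000];
 weight]

A real earth mover's distance on images would drop in the same way, since only the precomputed distance matrix d ever touches the items.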



