Heuristics for the crossing number

In the introduction to the chapter, we wondered if there was anything we could do to get out of this tight spot with crossing number analysis. The truth is that, to the best of current knowledge, we can’t do much, at least not unless we relax the requirement for a deterministic algorithm that guarantees the correct answer, returning an embedding with a graph’s minimum crossing number (or minimum rectilinear crossing number).

A common strategy for dealing with NP-hard problems, in fact, is to use heuristics: algorithms, often operating non-deterministically, that are able to provide a sub-optimal answer in a reasonable amount of time.

NOTE We have already met randomized algorithms throughout this book, for instance, in chapters 3 and 4, but if you could use a refresher, feel free to take a look at appendix F, which provides a brief summary of randomized algorithms.

1. Did you just say heuristics?

I realize the concept of heuristics might be confusing: Why would we accept an algorithm that doesn’t return the correct answer? Sometimes we can’t. There are problems for which we absolutely need to get the right answer, even if we need to wait longer; for instance, if you are running a nuclear power plant or designing a new drug, you don’t want to settle for a sub-optimal solution before trying all the possible (and reasonable) configurations.

At other times, getting to the right answer might not even be possible. An exponential algorithm becomes unfeasible for inputs larger than a few dozen elements; for larger inputs, we would never live to see the program output an answer.

For easier problems, even if the wait wouldn’t be longer than human history, it could still be too long. Think about live events or forecasting: there won’t be much use for tomorrow’s weather forecast if it’s delivered in three days!

So, in all these cases, we might be willing to accept a sub-optimal answer, provided we can get it within a reasonable time—but that’s not the whole story.

Another important consideration in favor of heuristics is that despite not being able to guarantee the optimal solution on all inputs, some heuristics are able to return the optimal result in a reasonable time for a subset of the whole problem space, and in some cases they prove to work quite well in practice on real-world data.

How is this possible? It’s because the theory of NP-hardness is about worst-case performance, and for some problems only a minority of edge cases turn out to be hard to solve; for practical applications, however, we are often interested in average-case hardness, that is, how difficult it is, on average, to solve a problem on the instances we see in real scenarios.

There are many heuristics that have been developed for graph algorithms, for example, to solve the traveling salesman problem or find a clique on a graph. Not all heuristics are the same, of course. Usually they are a compromise between performance and accuracy, and for some of these algorithms it’s possible to prove bounds on their precision, such as proving that the solution they output will be within a certain margin from the optimal solution.

So, what kind of heuristic could we use to find a good (or at least decent) graph embedding?

Now, I’d like you to keep in mind what we said a few lines previously: there can be many different approximate algorithms for each problem, and not all of them are equally good.

We’ll start easy, with a simple heuristic that’s far from ideal. It will serve a purpose, letting us understand the problem scenario, and then we can (and will) iterate on it to improve its performance.

Before we move on to our first attempt, I encourage you to pause and try to develop your own idea. Who knows, you might have a breakthrough and come up with a new solution!

Whenever you are ready, let’s go. We will reuse an algorithm that we saw in previous chapters, specifically a helper method we used in clustering.

Remember k-means? We had this problem of initializing the centroids: coming up with an initial choice of k points in an n-dimensional space. It looked easier than it actually was, didn’t it? And yet we saw how important it is to make a good choice for the initial points, so you can get fast convergence and a good result.

Take a look at section 12.2, and in particular at listings 12.1 and 12.2, to see how we implemented the random initialization of points for the centroids.

Here, with the graph embedding, we have a similar problem. We need to choose a certain number of 2-D points, and the way we choose them will directly determine the result (this time, there isn’t an optimization algorithm running after the choice).

One important difference is that with k-means we had an underlying distribution, and choosing points that were uniformly distributed with respect to that distribution was harder and required particular caution.

For graph embedding, we can draw our points from a finite portion of the plane, typically a rectangle, and those points will determine the number of edge intersections.

Let’s define the problem more formally now:

Given a simple graph G = (V, E) with n = |V| vertices and m = |E| edges, we need to draw n random points from a finite rectangle whose corners are (0, 0) and (W, H), so that the following constraints hold (a minimal code sketch follows the list):

  • Each point (x,y) is such that 0≤x<W, 0≤y<H.
  • Edges will be drawn as straight-line segments.
  • Vertices will be drawn as points (or circles) centered at those n points.
  • No two vertices can be assigned to the same center, so given two points (x1, y1) and (x2, y2), either x1 ≠ x2 or y1 ≠ y2.
  • No vertex can lie on the path of an edge.
  • We assume G has no loops. If it did, we could always draw loops without intersecting any other edge.
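
To make the setup concrete, here is a minimal sketch in Python of how the n random centers could be drawn; the function name random_centers is just an assumption, not one of the book’s listings. It enforces constraint 4 (no shared centers) directly, while constraint 5 can only be checked once the edges are drawn and is left to the helper methods discussed next.

import random

def random_centers(n, width, height):
    # Draw n distinct random points inside the rectangle [0, W) x [0, H).
    centers = set()
    while len(centers) < n:
        # random.random() returns a float in [0, 1), so each coordinate
        # satisfies 0 <= x < W and 0 <= y < H, as required.
        point = (random.random() * width, random.random() * height)
        centers.add(point)  # a duplicate point is silently discarded (constraint 4)
    return list(centers)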

Figure 16.2 shows a few examples of valid and invalid choices of vertex centers, given our constraints. In particular, we will have to check the conditions at points 4 and 5 once we have chosen all the points, but we’ll defer the discussion of how to perform these checks. For now, we can assume we have helper methods for them, and if we find that any of these constraints is violated, we have two strategies to make it right:

  • Correct the vertices’ positions, for example by slightly perturbing, at random, those points that coincide or intersect an edge.
  • Discard the solution violating a constraint and start over.

Notice that if we draw vertices as circles (as in figure 16.2) and not just points, we will have to make the requirement on vertices stronger, requiring that no two circles intersect.

Now, as we saw for k-means, when we count on random methods, we can be lucky, but more often than not we won’t be. Figure 16.3 shows a particularly unlucky assignment for the K6 graph. As we know from chapter 15, it’s possible to draw this complete graph in the plane with three intersections between its edges.

A way to raise our chances with k-means was to use the random-restart technique, basically running the algorithm (and in turn the random initialization) several times, and then saving the best solution found across all those runs.

This strategy looks interesting for our problem as well:

  1. Randomly generate the positions of the vertices.
  2. Check that the assignment abides by the constraints for edges and vertices (and discard the current assignment if it doesn’t).
  3. Compute the number of edge intersections.
  4. If it’s the best result so far, keep it; otherwise, discard it.
  5. Restart from 1.

This workflow is shown in figure 16.4 and implemented in listing 16.1. This heuristic is called random sampling.
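
Listing 16.1 itself isn’t reproduced here, but the workflow translates into code almost line by line. Here is a minimal sketch, assuming a graph object exposing vertices and edges, the hypothetical random_centers generator sketched earlier, and the helpers validate and edgeIntersections described below:

def random_sampling(graph, width, height, max_runs=1000):
    # Random sampling: repeat the random embedding several times and
    # keep the assignment with the fewest edge intersections.
    best_positions, best_crossings = None, float('inf')
    for _ in range(max_runs):
        centers = random_centers(len(graph.vertices), width, height)  # step 1
        positions = dict(zip(graph.vertices, centers))
        if not validate(graph, positions):   # step 2: constraints violated,
            continue                         # discard and start over
        crossings = edgeIntersections(graph, positions)  # step 3
        if crossings < best_crossings:       # step 4: keep the best result so far
            best_positions, best_crossings = positions, crossings
    return best_positions, best_crossings   # step 5 is the loop itself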

One last word of caution: we don’t need to assume the graph is connected, but breaking it down into connected components before applying the heuristic can improve the final result.

In listing 16.1 we abstract two important helper methods: validate and edgeIntersections. Both can be implemented separately and adapted to the actual context we decide to operate in.

For both, however, it’s possible to give a generic definition that in turn abstracts over more specific methods: listings 16.2 and 16.3 show these definitions.

The implementation of the helper methods used in listings 16.2 and 16.3 was dis­cussed in the last chapter, in section 15.4.
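
Listings 16.2 and 16.3 aren’t reproduced here either, but their general shape is easy to picture: loop over all vertex pairs and all vertex/edge pairs, delegating the geometric tests to the context-specific helpers from section 15.4. A rough sketch, where verticesOverlap and vertexIntersectsEdge are assumed names for those helpers:

def validate(graph, positions):
    # Generic constraint check: no two vertices share (or overlap at) a
    # center, and no vertex lies on the path of an edge it doesn't belong to.
    vertices = list(graph.vertices)
    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            if verticesOverlap(positions[u], positions[v]):
                return False
    for v in vertices:
        for (a, b) in graph.edges:
            if v != a and v != b and vertexIntersectsEdge(
                    positions[v], positions[a], positions[b]):
                return False
    return True

The geometric predicates are exactly the context-dependent parts: with circles instead of points, for instance, verticesOverlap must compare the distance between the centers against the sum of the radii.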

Notice that the helper methods checking whether a vertex intersects another vertex or an edge are context-dependent, based on how we draw vertices. If they are drawn as circles, we need to make sure that the circles used to draw any pair of vertices have no intersection, and that no edge is drawn over a vertex’s circle (otherwise, a single edge passing over a vertex might look like two edges adjacent to that vertex).

As with the previous method, we abstract away the actual algorithm checking the edges. This way, depending on the context, you can use one that assumes edges are drawn as straight-line segments, or as Bezier curves, and so on.

2. Extending to curve-line edges

In the previous section, we added as a constraint that edges were to be drawn as straight-line segments. As a consequence, we were optimizing the embeddings to reduce the intersections of a straight-line drawing, and the total number of intersections could only be as low as the rectilinear crossing number of the graph. As we saw in chapter 15, there are many graphs for which the rectilinear crossing number is larger than the crossing number (which, in turn, is the absolute minimum for the number of intersections in any drawing of a graph in the plane).

The restriction to straight-line drawings was not explicitly required in the code, which was kept as abstract as possible. However, if we want to use curves for the edges, we need to apply at least one change to listing 16.1: we also have to decide how each edge is modeled. This can be done in several ways, the simplest being to randomly choose some parameters that determine how each edge is drawn, possibly followed by some optimization on these parameters to minimize the intersections.

First, however, we need to decide which parameters we are talking about. To keep things simple, we restrict this to Bezier curves: a quadratic Bezier curve can be described with three 2-D points (the two endpoints, plus a control point), while the cubic version, which (as shown in figure 16.5) is more flexible, needs two control points, for a total of four 2-D points (which make eight scalar parameters).
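
As a refresher on this parameterization (the details are in section 15.4.3), here is a sketch of how a point on each kind of curve can be evaluated; the formulas are the standard Bezier polynomials, while the function names are our own:

def quadratic_bezier(p0, c, p1, t):
    # Point at parameter t in [0, 1] on the quadratic Bezier curve
    # B(t) = (1-t)^2 * p0 + 2t(1-t) * c + t^2 * p1.
    s = 1 - t
    return (s*s*p0[0] + 2*s*t*c[0] + t*t*p1[0],
            s*s*p0[1] + 2*s*t*c[1] + t*t*p1[1])

def cubic_bezier(p0, c0, c1, p1, t):
    # Cubic version: two control points, eight scalar parameters in total.
    s = 1 - t
    return (s**3*p0[0] + 3*s*s*t*c0[0] + 3*s*t*t*c1[0] + t**3*p1[0],
            s**3*p0[1] + 3*s*s*t*c0[1] + 3*s*t*t*c1[1] + t**3*p1[1])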

We have discussed the details of these representations in section 15.4.3. By committing to a specific subcategory of these curves (quadratic, symmetric Bezier curves, as shown in figure 16.6), we can already describe how to extend the algorithm in listing 16.1 to deal with the extra parameters.

In particular, to stay true to the choice of a fully randomized algorithm, we could just randomly choose each edge’s control point(s); however, so much freedom in the choice could lead to weird shapes for the edges, making the algorithm’s convergence to a good solution slower.

A more practical alternative is to restrict the allowed curves even further, for example to quadratic curves whose control point is at the same distance from both of the edge’s endpoints. This choice, explained in figure 16.6, allows us to balance flexibility and complexity, so that we only need to add a single real number to the model of each edge: the distance of the control point from the segment between the endpoints (denoted with d in the figure). It’s also advisable (but not strictly necessary) to restrict the possible values of d so that its absolute value is on the order of magnitude of w, the distance between the endpoints; negative values of d will cause the convexity of the edge to flip (in figure 16.6, with a negative value for d, the curve would be drawn below the segment uv passing through the vertices).
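
In code, that single extra parameter per edge might look like the following sketch (the function name is an assumption): the control point lies on the perpendicular bisector of the segment between the endpoints, at signed distance d from its midpoint.

import math

def symmetric_control_point(u, v, d):
    # Control point of a symmetric quadratic Bezier edge: equidistant
    # from u and v, at signed distance d from the segment's midpoint.
    # Negative d flips the convexity of the curve.
    mid_x, mid_y = (u[0] + v[0]) / 2, (u[1] + v[1]) / 2
    dx, dy = v[0] - u[0], v[1] - u[1]
    w = math.hypot(dx, dy)       # distance between the endpoints
    # Unit vector perpendicular to uv (w > 0, since G has no loops
    # and no two vertices share a center).
    nx, ny = -dy / w, dx / w
    return (mid_x + d * nx, mid_y + d * ny)

To follow the advice above on the magnitude of d, it could then be drawn, for instance, uniformly at random from [-w, w].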

Source: La Rocca, Marcello (2021), Advanced Algorithms and Data Structures, Manning Publications.
