
Multi-Itinerary Optimization as Cloud Service


Abstract

In this paper, we describe multi-itinerary optimization (MIO)—a novel Bing Maps service that automates the process of building itineraries for multiple agents while optimizing their routes to minimize travel time or distance. MIO can be used by organizations with a fleet of vehicles and drivers, mobile salesforce, or a team of personnel in the field, to maximize workforce efficiency. It supports a variety of constraints, such as service time windows, duration, priority, pickup and delivery dependencies, and vehicle capacity. MIO also considers traffic conditions between locations, resulting in algorithmic challenges at multiple levels (e.g., calculating time-dependent travel-time distance matrices at scale and scheduling services for multiple agents).

To support an end-to-end cloud service with turnaround times of a few seconds, our algorithm design targets a sweet spot between accuracy and performance. Toward that end, we build a scalable approach based on the ALNS metaheuristic. Our experiments show that accounting for traffic significantly improves solution quality: MIO finds efficient routes that avoid late arrivals, whereas traffic-agnostic approaches incur a 15% increase in combined travel time and arrival lateness. Furthermore, our approach generates itineraries with substantially higher quality than a cutting-edge heuristic (LKH), with faster running times for large instances.


1. Introduction

Route planning and service dispatch are time-consuming manual operations for many businesses. This manual process rarely finds efficient solutions, especially ones that must account for traffic, service time windows, and other complicated real-world constraints. Additionally, scale becomes a challenge: service dispatch planning may involve multiple vehicles that need to be routed between numerous locations over periods of multiple days.

The development of large-scale Internet mapping services, such as Google and Bing Maps, creates an opportunity for solving route planning problems automatically, as a cloud service. Large amounts of data regarding geolocations, travel history, etc., are being stored in enterprise clouds and can in principle be exploited for deriving customized itineraries for multiple agents. The goal of such automation is to increase operational efficiency by determining these itineraries faster (with less manual intervention) and with better quality than manually produced schedules. However, multiple challenges must be solved to make this vision a reality.

First, route planning requires efficiently calculating the travel-time matrices between different locations. Although the problem is well understood for free-flow travel times (i.e., no traffic),13, 8 producing the traffic-aware travel times on-demand and for any point in time requires careful attention to algorithmic and system scalability. Second, route planning itself must consider multiple features—time windows, priority, amount of time spent in each location (e.g., service duration or dwell time), vehicle capacity, pickup and delivery ordering, and the predicted traffic between locations. The single agent version without all these complex constraints corresponds to the traveling salesman problem (TSP), which is already NP-hard. Numerous extensions to TSP have been studied in operations research and related disciplines under the vehicle routing problem (VRP).15, 16, 18, 22 However, the bulk of the work is not readily extensible to account for traffic between locations, especially at a large scale. To address customer requirements, our service must incorporate traffic and output an optimized schedule within seconds for instances with hundreds of waypoints.

In this paper, we describe multi-itinerary optimization (MIO), a recently deployed Bing Maps service, available for public use.2 The design of MIO tackles the above algorithmic challenges, as well as underlying engineering requirements (e.g., efficient use of cloud resources). In particular, our solution consists of a structured pipeline of advanced algorithms. At the lowest layer, we compute travel-time matrices by combining contraction hierarchies (CH)13 with traffic predictions, resulting in an efficient time-dependent shortest-path algorithm. We then use these matrices for itinerary optimization. Our algorithm for itinerary optimization is built on a popular metaheuristic, adaptive large neighborhood search (ALNS),21 which pursues an optimal schedule by judiciously choosing between multiple search operators (i.e., repair and destroy). Our search operators have been carefully designed to account for traffic and heterogeneous agents. The entire pipeline is implemented as a cloud service, which is easily accessible through a flexible REST architecture.

We perform extensive evaluations to examine the quality of our end-to-end solution. In particular, we first highlight the significance of accounting for traffic in route planning. Our experiments find that when traffic is ignored during planning, up to 40% of the planned work (e.g., services or dwell time at waypoints) violates its time window constraints when traffic is reintroduced. Furthermore, the combined travel time and arrival lateness is 15% higher on average than with our traffic-aware approach. On the other hand, using conservative travel times (i.e., the maximum travel time between locations) results in schedules with up to 10% more travel time. Next, we compare MIO to a state-of-the-art heuristic, Lin-Kernighan-Helsgaun (LKH).15 Our results indicate that MIO obtains significantly higher quality solutions in terms of services satisfied on time, with less processing time: MIO runs 2x faster than LKH on our publicly released instances that include around 1000 locations.7

Although related commercial offerings exist (see Section 5 for an overview), companies typically obscure some details of the algorithms and/or use some algorithmic components as a black box (e.g., the travel-time calculations). To the best of our knowledge, this paper is the first to report the full algorithmic details of an end-to-end cloud service that produces traffic-aware itineraries for multiple agents. The rest of the paper is organized as follows. Section 2 provides some necessary background; Section 3 outlines the details of our algorithms, as well as our cloud deployment. Section 4 describes our experiments and results. We survey related work in Section 5 and conclude in Section 6.


2. Background and Motivation

*  2.1. Internet mapping services

Recent innovations in internet mapping services have created many new business opportunities for large cloud providers such as Microsoft and Google. For example, in the business-to-consumer space, location-based services can be used for a more personalized user experience and better targeted advertising, among other things. In the enterprise resource planning context, tasks such as fleet management (e.g., pickup and delivery trucks) and workforce scheduling (e.g., dispatching and routing field technicians) can benefit from automated geographic information system (GIS) services built on accurate and up-to-date geospatial data. However, in the age of the internet, users of mapping services have high expectations. For example, a "large" scheduling task (say, hundreds to thousands of waypoints) is expected to complete within a few minutes, and smaller tasks (tens of waypoints) within seconds. From the cloud provider perspective, these expectations translate to service-level objectives (SLOs) on the end-to-end response time. Additionally, providers wish to generate high-quality solutions on predefined metrics (e.g., minimizing travel time or fuel consumption), while using compute resources efficiently.

*  2.2. Multi-itinerary optimization

Bing Maps recently deployed an enterprise-level planning service called multi-itinerary optimization (MIO).2 The term "itinerary" corresponds to a single agent (e.g., a truck or technician) and has carried over from an earlier consumer version for vacation planning. MIO takes as input a set of waypoints, that is, lat/long locations, each with its own requirements; the requirements may include dwell time, time window, priority, pickup locations, and quantity of goods. The other part of the input is a set of agents. Each agent is characterized by a start/end location, available time window, and vehicle capacity (when relevant). Given this input, the goal of MIO is to find a feasible assignment (i.e., one respecting the constraints) of a subset of waypoints to the given agents that maximizes some system objective. For example, a popular objective is to maximize the number (or priority) of waypoints visited while minimizing the total travel time.

*  2.3. Design challenges

The implementation of MIO as a cloud service has multiple levels of complexity. First and foremost, the service must scale robustly to client demand. Our travel-time calculations involve taking a set of lat/long locations (anywhere in the world), snapping them to the road network, and quickly calculating pairwise shortest travel distance while accounting for traffic. This has required significant algorithmic and engineering effort to support large volumes of users and waypoints. On top of that, incorporating predictive traffic into route planning poses an additional level of complexity to a problem that is already NP-hard. One may wonder whether it is really necessary to account for traffic. Figure 1 demonstrates time-dependent travel times for a single origin-to-destination route (Seattle downtown to Microsoft campus). Notice that the travel time varies considerably throughout the day due to traffic, particularly during peak times (8–9 AM and 3–6 PM). With such significant variations in travel times, it is essential to explicitly account for traffic. Our experiments in Section 4.2 provide a quantitative analysis of this intuition and highlight potential business ramifications when traffic is ignored.

Figure 1. Time-dependent traffic profile for an example route.


3. MIO Design

In this section, we provide the details of MIO’s design. Section 3.1 describes how we efficiently calculate travel-time matrices that account for traffic. These travel-time matrices serve as input to our route planning optimization (Section 3.2). In Section 3.3, we highlight some engineering choices that make MIO an efficient cloud service.

*  3.1. Travel-time calculations

Given a set of locations, a distance matrix is a two-dimensional matrix constructed by calculating the length of the shortest path between each pair of locations. The shortest path can in principle represent different metrics, such as physical distance, travel time, and cost. By convention, we use the term distance matrix when the shortest path minimizes free-flow travel time (i.e., no traffic). A traffic matrix adds an extra dimension of time, where the shortest path minimizes time-dependent travel over a given time horizon.
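To make the two notions concrete, here is a minimal sketch of the shapes involved; the array layout and helper are illustrative only, not the service's internal representation.

import numpy as np

N = 4    # number of waypoints
T = 96   # 15-minute intervals in a one-day horizon (assumption for illustration)

# Distance matrix: one free-flow travel time (seconds) per ordered pair.
distance_matrix = np.zeros((N, N))

# Traffic matrix: adds a time dimension, one travel time per ordered pair
# and per departure interval within the horizon.
traffic_matrix = np.zeros((N, N, T))

def travel_time(origin: int, destination: int, interval: int) -> float:
    """Time-dependent travel time when departing in the given interval."""
    return float(traffic_matrix[origin, destination, interval])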

For efficient calculation, our algorithm for generating a traffic matrix uses an associated distance matrix as a baseline and extends it to time-dependent travel times using precomputed predictive traffic. There are several implementations for fast distance matrix calculations based on hub labels8 or contraction hierarchies.13 Contraction hierarchies (CH) is an efficient approach (in data size, preprocessing, and query speed) for distance calculations in road networks. It dramatically reduces the query time required to calculate shortest-path distances by performing a preprocessing step. The preprocessing step generates a multilayered node hierarchy (vertex levels) formed by a 'contraction' step, which temporarily removes nodes and adds 'shortcuts' to preserve correctness. Figure 2 shows a simple example. Our challenge was to integrate this fast travel-time computation while taking traffic fluctuations into account.

Figure 2. An example CH graph.24 Solid lines represent the original road network. A contraction step temporarily removes nodes (highlighted regions) and adds shortcuts (dashed line). Gray arrows show which edges are combined.

Input and offline processing. Our system combines many sources of traffic-related input (e.g., GPS traces) in order to estimate the travel times along every road at any moment in time. To give accurate day-of-week traffic predictions (with 15-minute granularity), we aggregate these travel times for each road, using six months of historical data—giving more weight to recent travel.
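To illustrate the weekly profile, the following sketch maps observed traversal times into quarter-hour-of-week buckets (7 x 24 x 4 = 672) and aggregates them with a recency weight. The exponential-decay weighting and the half-life value are assumptions for illustration; the text above only states that recent travel is weighted more heavily.

from datetime import datetime

BUCKETS_PER_WEEK = 7 * 24 * 4  # 672 quarter-hour buckets

def weekly_bucket(ts: datetime) -> int:
    """Map a timestamp to its quarter-hour-of-week bucket (0..671)."""
    return ts.weekday() * 96 + ts.hour * 4 + ts.minute // 15

def recency_weight(ts: datetime, now: datetime, half_life_days: float = 30.0) -> float:
    """Assumed exponential decay; the production weighting scheme is not published."""
    age_days = (now - ts).total_seconds() / 86400.0
    return 0.5 ** (age_days / half_life_days)

def aggregate(observations, now):
    """observations: iterable of (timestamp, traversal_seconds) for one road segment.
    Returns 672 weighted-average travel times (None where no data exists)."""
    num = [0.0] * BUCKETS_PER_WEEK
    den = [0.0] * BUCKETS_PER_WEEK
    for ts, seconds in observations:
        b, w = weekly_bucket(ts), recency_weight(ts, now)
        num[b] += w * seconds
        den[b] += w
    return [n / d if d > 0 else None for n, d in zip(num, den)]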

After preprocessing our road graph with contraction hierarchies (CH), we compute the travel time for every contraction offline, based on the predictive traffic data and other graph properties such as turn restrictions or turn costs; see Figure 3. As a result of this offline process, we have two outputs:

Figure 3. Offline pipeline for traffic matrix service.

  1. The CH graph (i.e., shortcut graph and vertex levels)
  2. The predictive traffic data for each edge in the CH graph containing 672 values (7 days x 24 hours x 4 quarter hours)

At query time, the CH graph is loaded in memory (~3GB for Western North America; ~85GB for the whole world) and the predictive traffic data is read from SSD using memory-mapped files (~100GB for Western North America; ~5TB for the whole world). The query to calculate the traffic matrix for a given time horizon uses a time-dependent shortest-path algorithm based on bidirectional Dijkstra.19

Time-dependent shortest path. In general, the time-dependent shortest-path problem is NP-hard; however, under certain assumptions, it can be solved efficiently with polynomial-time algorithms.9, 14, 11 In particular, we assume the FIFO property, which essentially means that a later departure cannot lead to an earlier arrival.

There are several possible objectives for the time-dependent shortest-path problem. For example, if one minimizes travel duration, the solution may wait at the origin until traffic has subsided, whereas when minimizing the total travel time, the solution may wait at any road junction to avoid traffic. In our design, we minimize arrival time, which avoids waiting and is a common objective in practice. Regardless of the variant, the solution to the time-dependent shortest-path problem is a distance function, parameterized by a starting 'dispatch' time. Our algorithm has been specifically designed to produce a (three-dimensional) traffic matrix, that is, a piecewise-constant time-dependent distance function (discretized into 15-minute intervals) for each pair of locations. We base our time-dependent shortest-path algorithm on bidirectional Dijkstra, in order to quickly calculate all time-dependent distances from a single origin to many destinations simultaneously.
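As an illustration of the piecewise-constant distance functions, here is a minimal sketch of evaluating and composing 15-minute-discretized travel times along a route under the FIFO assumption; the function names and table layout are illustrative, not the production representation.

INTERVAL_SECONDS = 15 * 60

def lookup(travel_times, depart_seconds):
    """travel_times[k] = travel time (s) when departing in interval k of the
    horizon; the function is piecewise constant within each 15-minute interval."""
    k = min(int(depart_seconds // INTERVAL_SECONDS), len(travel_times) - 1)
    return travel_times[k]

def arrival_along_route(legs, depart_seconds):
    """legs: list of per-leg interval tables. Returns the arrival time (seconds
    since the start of the horizon), assuming the FIFO property, so no waiting
    is ever beneficial."""
    t = depart_seconds
    for leg in legs:
        t += lookup(leg, t)
    return t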

Algorithm details. CH is applied over the road network and the resulting shortcut graph and vertex levels are used. Predicted travel times are stored in 15-minute intervals over one week (for a total of 672 values). The CH graph is loaded into RAM, and traffic predictions are accessed via memory-mapped files on SSD.

A traffic matrix request of N waypoints over a time horizon with T intervals performs N individual one-to-N time-dependent shortest-path queries (one for each origin, for a total of N x N x T travel times). The CH graph is used to expedite the bidirectional (forward/backward) search, as it is sufficient to only explore nodes that have a higher vertex level (i.e., greater importance).

In the forward search, starting from the origin, vertices are explored in the order of their level, by following outgoing hops (edges). For each vertex encountered, the time-dependent travel time is calculated from the origin, and the shortest travel times for each vertex (and requested time intervals) are cached. To improve query speed, the forward search is bounded by the number of hops from the origin (2x maximum number of shortcut edges in a contraction) and distance traveled (10x free-flow travel time from origin to farthest destination).

For each destination, a backward Dijkstra-like search is performed, again visiting only nodes (via incoming edges) with higher vertex levels in the CH graph and using free-flow travel time to determine the shortest path. Once the backward search reaches a vertex that was seen during the forward search, the backward route from that vertex to the destination and the corresponding travel times (with predicted traffic) are calculated. Each resulting travel time at each interval is then compared with the previously found travel times (initially set to a very high value); if any are smaller than what has already been found, the results array is updated. After a fixed maximum number of rounds without finding a smaller travel time for any interval, the backward search is stopped.

After all destinations have been processed, the algorithm returns the N x T shortest travel times for the given origin node and time horizon.

We note that using free-flow travel times to guide the backward search may result in an approximation of the time-dependent shortest path; however, this is unlikely to matter in practice, because the search for better solutions continues after the first intersection between the forward and backward searches is found.
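The following is a heavily simplified sketch of the per-interval forward phase: a label-correcting search over the upward (higher-level) CH edges that maintains, for each reached vertex, the best time-dependent travel time from the origin for every requested departure interval. The backward phase, the hop and distance bounds, shortcut unpacking, and the stopping criterion are omitted, and the graph and edge-profile representations are assumptions for illustration.

import heapq

INTERVAL_SECONDS = 15 * 60

def forward_search(upward_edges, level, edge_profile, origin, num_intervals):
    """upward_edges[u] -> list of (v, free_flow_seconds), with edges leading
    only toward higher-level vertices of the CH graph; edge_profile(u, v, t)
    returns the predicted traversal time (s) when departing u at time t.
    Returns labels[v][k] = best travel time origin -> v when departing at the
    start of interval k of the horizon."""
    INF = float("inf")
    labels = {origin: [0.0] * num_intervals}
    # Queue ordered by a free-flow estimate; a simplification of the
    # level-order exploration with hop/distance bounds used in production.
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        for v, ff in upward_edges.get(u, []):
            if level[v] <= level[u]:
                continue  # only climb the hierarchy
            lv = labels.setdefault(v, [INF] * num_intervals)
            improved = False
            # Relax the edge independently for every departure interval.
            for k in range(num_intervals):
                if labels[u][k] == INF:
                    continue
                depart = k * INTERVAL_SECONDS + labels[u][k]
                cand = labels[u][k] + edge_profile(u, v, depart)
                if cand < lv[k]:
                    lv[k] = cand
                    improved = True
            if improved:
                heapq.heappush(pq, (d + ff, v))  # label-correcting: may revisit v
    return labels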

Performance. We evaluated traffic matrix performance on a query set of 1200 instances consisting of waypoints from Germany. These queries vary by the distance between waypoints and the matrix request size, with 100 queries per variant. Each query was run with a time horizon of both 96 and 672 intervals (a single day and a full week, respectively); the results are shown in Table 1. The average response times are partitioned by distance, number of intervals, and the size of the request matrix, where cold indicates the response time when traffic data was read from SSD, and warm indicates the response time when the traffic data was already cached in memory.

Table 1. Traffic matrix cold/warm response times.

Observe that distance has little impact on the warm response time; however, it significantly impacts the cold one. This is due to fewer vertices being shared between the routes, causing more SSD trips to fetch the traffic data, as well as requiring a longer forward exploration phase. On the other hand, the warm response time is most impacted by the number of intervals: requesting 672 intervals is clearly more expensive than 96 intervals for both cold and warm, but the warm response time for the full week is almost equivalent to the cold one. Finally, changes in matrix size impact performance as expected. All experiments were run single threaded, hence the linear relationship between the single-source matrix (e.g., 1 x 10) and the multiple-source matrix (e.g., 10 x 10); this is most noticeable for the warm response time (e.g., 0.03 and 0.3 s, respectively).

*  3.2. Itinerary optimization

Equipped with traffic matrices, MIO finds efficient solutions for a variety of routing scenarios and objectives. At its core, it schedules agents to visit a set of locations. Each location is visited at most once, by at most one agent, unless the location is a depot or supports pickups/dropoffs. The dwell time at each location is given and must be completed within the specified time window at that location. Agents may arrive early, but they must wait until the time window opens before starting their service. Vehicles may have a limited capacity when delivering goods from a depot or moving items from pickup locations. It is currently assumed that items cannot be split and cannot be transferred to other agents or routes via an intermediate location. Our default objective is to maximize the number of visited locations, weighted by priority, while minimizing the total travel time of the agents.
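To ground the problem statement, here is a minimal sketch of the input entities and the default objective; the field names and the travel-time weight are illustrative and do not mirror the public API schema.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Waypoint:
    location: Tuple[float, float]          # (lat, lon)
    dwell_seconds: float                   # service duration at the location
    window: Tuple[float, float]            # earliest/latest service start (s)
    priority: int = 1
    quantity: float = 0.0                  # goods to deliver (0 if none)
    pickup_at: Optional[int] = None        # index of a required pickup waypoint

@dataclass
class Agent:
    start: Tuple[float, float]
    end: Tuple[float, float]
    shift: Tuple[float, float]             # available time window (s)
    capacity: float = float("inf")

@dataclass
class Itinerary:
    agent: int
    visits: List[int] = field(default_factory=list)  # ordered waypoint indices

def objective(itineraries, waypoints, travel_time_of):
    """Default objective: maximize priority-weighted visits, minimize travel.
    travel_time_of(itinerary) returns the itinerary's total travel time (s);
    the 1e-5 weight is an arbitrary illustration of the trade-off."""
    visited = sum(waypoints[w].priority for it in itineraries for w in it.visits)
    travel = sum(travel_time_of(it) for it in itineraries)
    return visited - 1e-5 * travel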

MIO algorithmic approach. Adaptive large neighborhood search (ALNS) is a popular metaheuristic that has been used for many optimization problems, and in particular for vehicle routing problems. ALNS uses a simulated annealing framework, with a local search at each iteration that adapts its behavior based on previous iterations. This local search is controlled by randomly choosing a pair of destroy/repair operations (from a predefined set). Starting from a feasible solution, the destroy operation modifies the solution, possibly rendering it infeasible; the repair operation then alters the solution with a guarantee of feasibility. If the new solution is better than the previous one, the probability of choosing this destroy/repair pair increases. This is the adaptive learning component of ALNS. Our implementation follows the approach outlined in Pisinger and Ropke,21 and in addition supports parallel processing and multiple objectives (via an automatic weighting scheme). See Algorithm 1 for a high-level overview. ALNS terminates either when the incumbent solution has not changed within the last 5000 iterations after a minimum temperature is reached (i.e., we are confident that we have explored enough of the search space and the solution did not change) or when a fixed timeout, given as a function of the number of agents and locations, is reached. At the end of this process, we apply a k-opt swap post-optimization step to our solution, in a final attempt to improve its quality.

The most important component for efficient computation is the design of the destroy and repair operators. These guide the local search and attempt to exploit the structure of the combinatorial optimization problem. A careful balance must be struck between fast iterations and intelligent decisions. Below, we list our set of operators with a brief description of their intent; a small code sketch of two representative operators follows the lists.

Destroy methods:
  • SKIPS: removes visits where the cost of reaching the next location is greater than going directly from the previous one.
  • HIGH COST: removes the most 'expensive' (e.g., in terms of travel distance) locations.
  • REPLACE ITEM: removes a location and replaces it with another suitable one.
  • NEIGHBOURHOOD: picks a random location and then removes the locations in its neighborhood.
  • TRANSFERS: removes locations that would have a lower cost in the itinerary of other agents.

Repair methods:
  • EARLY: prefers to insert random locations near the beginning of an agent's schedule.
  • LATE: prefers to insert random locations near the end of an agent's schedule.
  • NN: inserts locations using the nearest-neighbor heuristic.
  • GREEDY: inserts locations using a greedy algorithm.
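As an example of the operator interface, here is a minimal sketch of one destroy operator (NEIGHBOURHOOD) and one repair operator (NN); the production operators additionally handle time windows, capacities, priorities, and traffic-aware insertion costs.

import random

def destroy_neighbourhood(routes, unassigned, distance, k=5):
    """NEIGHBOURHOOD destroy (simplified): pick a random visited location and
    remove it together with its k nearest visited neighbors."""
    visited = [w for route in routes for w in route]
    if not visited:
        return
    seed = random.choice(visited)
    victims = set(sorted(visited, key=lambda w: distance(seed, w))[: k + 1])
    for route in routes:
        route[:] = [w for w in route if w not in victims]
    unassigned.extend(victims)

def repair_nearest_neighbor(routes, unassigned, distance):
    """NN repair (simplified): append each unassigned location to the route
    whose last stop is closest; assumes at least one route exists and omits
    feasibility checks and agent start locations."""
    while unassigned:
        w = unassigned.pop()
        best = min(routes, key=lambda r: distance(r[-1], w) if r else float("inf"))
        best.append(w)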

Algorithm 1: High-level ALNS algorithm

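A minimal sketch of the high-level loop described above. The acceptance rule, cooling schedule, adaptive weight update, and termination constants shown here are simplified placeholders rather than the production logic, and k_opt stands in for the post-optimization step.

import math
import random
import time

def alns(initial, destroy_ops, repair_ops, cost, timeout_s,
         start_temp=1.0, cooling=0.9995, min_temp=1e-3, patience=5000):
    """Simplified ALNS: simulated-annealing acceptance with adaptive operator
    weights. cost maps a solution to a scalar to minimize; each destroy/repair
    operator returns a new candidate solution."""
    best = current = initial
    w_destroy = [1.0] * len(destroy_ops)
    w_repair = [1.0] * len(repair_ops)
    temp, stale, deadline = start_temp, 0, time.time() + timeout_s

    while time.time() < deadline and not (temp <= min_temp and stale >= patience):
        di = random.choices(range(len(destroy_ops)), weights=w_destroy)[0]
        ri = random.choices(range(len(repair_ops)), weights=w_repair)[0]
        candidate = repair_ops[ri](destroy_ops[di](current))

        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            current = candidate
        if cost(candidate) < cost(best):
            best, stale = candidate, 0
            # Reward the operator pair that produced an improvement.
            w_destroy[di] += 0.1
            w_repair[ri] += 0.1
        else:
            stale += 1
        temp *= cooling

    return k_opt(best)  # k-opt swap post-optimization step (placeholder below)

def k_opt(solution):
    """Placeholder for the k-opt swap post-optimization step."""
    return solution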

*  3.3. Cloud deployment and engineering

The design of MIO as a cloud service must consider hosting costs and resource usage. Large amounts of data must be stored in memory for quick access (hub labels and the CH graph, ~200GB) and on SSD (traffic data, ~5TB), and many underlying virtual machines (VMs) may be required to support a large volume of requests. A simple implementation would consist of a single VM role that hosts the entire service, where the number of VMs is scaled based on request volume. However, we can exploit the special structure of our service for a more resource-efficient architecture. From an operational perspective, three components support the entirety of the algorithms described earlier in this section:

  1. Distance Matrix requires a lot of memory to host the hub labels.
  2. Traffic Matrix needs comparatively little memory to host the CH graph. However, the component is CPU intensive and requires a lot of SSD storage for the traffic data.
  3. Route Optimization does not persist any data in memory; the computation itself is CPU intensive.

Accordingly, the MIO system includes three different VM roles, one for each of the above components (see Figure 4 for a high-level architecture overview). The roles communicate via binary HTTP protocol. Each component can be separately scaled up/down based on current load conditions. Specifically, the distance matrix role is hosted on memory-optimized VMs1; because our design achieves high throughput, we rarely need to scale this component. For the traffic matrix, SSD storage needs to be provisioned for the maximum load conditions. In addition, we use compute-optimized VMs1 for its computation; these VMs can be scaled quickly to reduce operating costs. Similarly, the route optimization role is also hosted on compute-optimized VMs that are scaled as necessary. Clearly, the alternative implementation with a single VM role would not be able to achieve this level of flexibility and efficiency—any scale up is likely to result in resource wastage in at least one dimension.

Figure 4. System architecture. Route optimization can take either traffic or distance matrix as input.

To enable the above, each role is composed of:

  1. A dynamic scale set of VMs that scales based on a custom performance counter; the counter is tuned to the needs of the specific role.
  2. An Azure load balancer that spreads the requests to the VM scale set.
  3. An Azure traffic manager that handles the distribution of requests to the closest datacenter, as well as failover management in case of outage.

We conclude this section by briefly describing the interface of MIO. Some applications need to compute small schedules with very low latency. Other applications have larger problems and more lenient time frames; they prefer to run the service in the background, perhaps displaying a progress bar during the optimization. To address these different needs, we have designed MIO with both synchronous and asynchronous interfaces (for the former and latter applications, respectively). The synchronous interface receives an optimization request and responds with an optimized itinerary, whereas the asynchronous interface returns a callback id and schedules the job to be executed in the background. An Azure queue is used to orchestrate the execution of the background job on the service side, and the resulting itinerary is saved in Azure storage. When the client uses the callback id to poll for the completion of the job, MIO checks whether the optimization result exists in Azure storage and, if so, sends the output back to the application.
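A minimal client-side sketch of the asynchronous pattern, using the requests library; the base URL, payload fields, and response fields are hypothetical placeholders, not the actual Bing Maps API surface.

import time
import requests

SERVICE = "https://example.com/itinerary-optimization"   # hypothetical base URL

def optimize_async(agents, waypoints, poll_seconds=2.0, timeout_s=300.0):
    """Submit an asynchronous optimization job and poll for its result."""
    # Hypothetical request shape; the real service defines its own schema.
    submit = requests.post(f"{SERVICE}/jobs",
                           json={"agents": agents, "waypoints": waypoints})
    submit.raise_for_status()
    callback_id = submit.json()["callbackId"]             # hypothetical field

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{SERVICE}/jobs/{callback_id}")
        status.raise_for_status()
        body = status.json()
        if body.get("isCompleted"):                        # hypothetical field
            return body["itineraries"]                     # hypothetical field
        time.sleep(poll_seconds)
    raise TimeoutError("optimization job did not complete in time")

In this pattern, the synchronous interface corresponds to a single blocking request that returns the optimized itinerary directly.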


4. Experimental Results

*  4.1. Experiment setup

To evaluate the impact of traffic and the performance of MIO compared to state-of-the-art heuristics, we performed a large-scale computational study. In particular, we use the publicly available instances7 defined in a recent Microsoft Research project.17 These instances have been designed to realistically model the technician routing and scheduling problem with uncertain service duration. The 1440 instances use real-world locations and include a simplified traffic matrix obtained from our service. They cover a range of different sizes and have multiple agents with various shift starting times.

For a fair comparison of our algorithms, we use the expected value of the service duration and ignore instances where it is impossible to visit all waypoints when using maximum travel times. Accordingly, our optimization goal is to visit all waypoints while minimizing the travel time. If an algorithm cannot guarantee that time windows are satisfied (i.e., the agent is late to a waypoint), we prioritize minimizing lateness over minimizing travel time.

We compare traffic-agnostic and traffic-aware variants of MIO to heuristics based on Lin-Kernighan. These heuristics have been specifically designed to quickly find high-quality solutions to symmetric TSPs. For our experiments, we use LKH v3.0.6,15 a state-of-the-art extension to Lin-Kernighan that supports constrained optimization of asymmetric TSPs. LKH has been designed to optimize many different VRP scenarios and has had great success on very large instances.

We configured LKH as a capacitated vehicle routing problem with time windows (CVRPTW), which also supports dwell times at locations. Demands were set to zero. The maximum travel time between pairwise locations was used to avoid time-window penalties when calculating the true cost with traffic. Default parameters were used, with the addition of “SPECIAL”—we found that excluding this parameter increased the computation time for some instances by over 1500x, with little difference in solution quality.

All experiments ran on an Azure Standard DS14v2 VM instance having 16 cores and 120GB RAM. Although MIO can exploit multiple cores, we restrict all algorithms to a single thread for a fair comparison. Experiments of the same algorithm were run in parallel on the same VM instance (i.e., up to 16 experiments running concurrently) for faster throughput.

*  4.2. Experiment design

In our experiments, we wish to examine the significance of incorporating traffic. To that end, we run MIO in its standard mode with traffic predictions (MIO) and compare the results to settings where the traffic predictions are replaced with static travel times. In particular, we execute MIO under two variants of traffic-agnostic settings:

  • MIN: minimum time between locations (i.e., traffic-free)
  • MAX: maximum time between locations (i.e., peak traffic over a one-week period)

We note that using minimum travel times, as in MIN, has historically been the most common approach, as free-flow travel times can be accurately estimated using only distance and the enforced speed limits, rather than requiring large volumes of traffic data (see Section 3.1). As discussed above, we compare the different MIO variants with the LKH algorithm, which uses maximum travel times between locations (LKH).

By the design of our experiments, it is possible to visit all waypoints within their respective time windows; however, the traffic-agnostic solutions may schedule a waypoint in violation of its time window. That is, the agent would arrive late when the actual travel time (capturing the predicted traffic) is reintroduced. When such a violation occurs, we record the lateness of the agent and increase the count of violations. More specifically, MAX and MIO do not violate any time windows (i.e., the agent is never late), whereas MIN and LKH might. MIN is expected to violate time windows, as it is unaware of extreme traffic conditions. LKH seems to treat time windows as soft constraints; thus, some lateness is expected, although we configure the algorithm to prioritize arriving on time.

*  4.3. Results

A summary of our results is given in Figure 5; breakdowns by instance size are provided in Table 2 and Figure 6. All results have been normalized by the number of waypoints in an instance, that is, they represent the average cost per waypoint. For example, Figure 5 implies that, for every additional waypoint, MIO requires an additional 17.1 minutes of travel time and an extra 201 ms of runtime (on average). This allows a fair comparison of instances with different sizes. In particular, Figure 6 highlights the "economy of scale" of large instances covering the same geographic area: travel cost per waypoint decreases with the number of waypoints, which is expected as agents need to travel less between locations.

Table 2. Comparison of results per waypoint visited.

Figure 5. Comparison of algorithms over the average travel time and lateness for each waypoint visited.

Figure 6. Comparison of algorithms, with results averaged over each waypoint visited.

As can be seen from the results, MAX requires over 6% more travel per waypoint than MIO, which incorporates traffic. Nonetheless, the average runtime of MAX was the fastest of all algorithms. This is due to the very fast cache hit of static travel times (MAX does not need a binary search for the time-of-day lookup); in addition, using maximum travel times leads to fewer valid routes for the algorithm to evaluate. In practice, using maximum travel times is not recommended with hard time-window constraints, as some waypoints might never be scheduled because they "appear" impossible to reach within their time windows. In contrast, MIN unknowingly trades lateness for travel time. Consequently, its travel time is 1% less than MIO's on average, but its combined travel and lateness is almost 15% larger.

Finally, we compare our results with LKH. For smaller instances, LKH is extraordinarily fast, on average over 17x faster than our MIO variants, but this changes at scale, with MIO being almost 2x faster than LKH. Unfortunately, LKH seems to struggle to find high-quality solutions across the board. As LKH uses maximum travel times, we would expect its results to be similar to MAX; however, LKH generates schedules with significantly larger lateness. Its combined travel and lateness is over 72% higher than MIO's, that is, almost 2x greater on average. Careful inspection of our configuration and the publicly available LKH code suggests that this issue with low-quality solutions is systemic to the LKH algorithm. Although we could not identify anything incorrect in the approach, we did not conduct a full analysis to determine whether the issue is inherent to the Lin-Kernighan approach or lies within the extensions added by LKH.


5. Related Work

Most literature on vehicle routing problems (VRP) assumes that the travel time between locations is given as input.23 Often, these problems have a fixed set of locations (i.e., the distance matrix can be cached), but in general, optimizing the VRP is sufficiently complex that the shortest-path travel-time calculations are typically ignored. We, in contrast, pay close attention not only to the route optimization but also to the efficient calculation of travel times. Our system provides a complete end-to-end service that supports waypoints anywhere across the globe and aims to give a high-quality solution within a very short time frame. In our service, we have implemented the necessary distance and traffic calculations between locations, which are algorithmically nontrivial.

*  5.1. Related commercial products

Enterprises offer related cloud services, such as Microsoft Dynamics 365 for Field Service (Resource Scheduling Optimization),4 Routific Routing Engine,5 Google Maps Directions,3 and TomTom Routing.6 Such commercial offerings typically do not disclose the full details of the algorithms and/or use some algorithmic components as a black box (e.g., the travel-time calculations). To the best of our knowledge, this paper is the first to report the full algorithmic details of an end-to-end cloud service that supports traffic-aware itineraries for multiple agents.

*  5.2. Time-dependent shortest path

For an overview of query acceleration techniques for time-dependent shortest-path algorithms, we refer to Delling and Wagner.10 An approach similar to our traffic matrix algorithm was described by Geisberger,12 who also uses contraction hierarchies with a modified Dijkstra algorithm to calculate time-dependent shortest travel times between a single origin and multiple destinations.

*  5.3. The vehicle routing problem (VRP)

MIO was originally designed to solve the technician routing and scheduling problem (TRSP), which is a variant of VRP. The main extension of TRSP over VRP is the dwell time at each location, which can be different for each agent (i.e., based on skill/experience). Since our first design, MIO has been extended to include additional VRP features, such as vehicle capacity, pickup and delivery, multiple depots, and more. The ALNS method was originally introduced in Pisinger and Ropke21 as a general-purpose approach for solving VRPs and has been used in numerous papers, for example, to solve the TRSP.16, 20


6. Conclusion

In this paper, we described multi-itinerary optimization (MIO), a Bing Maps service that is publicly available worldwide. MIO takes a list of agents and desired waypoints with various requirements and outputs an efficient schedule and route for each agent. MIO encompasses a variety of algorithms, such as shortest-path mechanisms for travel-time calculation and a carefully designed heuristic for route optimization. Importantly, our algorithms account for traffic predictions, which have a significant impact in urban areas. Our experiments show that MIO achieves efficient solutions with fast performance: it produces high-quality schedules at scale with running times better than a state-of-the-art heuristic (LKH). We see MIO as an attractive tool that can automate relevant logistics operations for numerous businesses and organizations.

 

References

    1. Azure pricing. https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows.

    2. Bing Maps Multi-Itinerary Optimization. https://microsoft.com/en-us/maps/fleet-management.

    3. Google Maps Routes. https://cloud.google.com/maps-platform/routes.

    4. Resource Scheduling Optimization (RSO). https://dynamics.microsoft.com/en-us/field-service.

    5. Routific. https://routific.com.

    6. TomTom Routing API. https://developer.tomtom.com/routing-api.

    7. TRSP instances. https://github.com/microsoft/trsp.

    8. Abraham, I., Delling, D., Goldberg, A.V., Werneck, R.F. A hub-based labeling algorithm for shortest paths in road networks. In Experimental Algorithms, P. M. Pardalos and S. Rebennack, eds. Lecture Notes in Computer Science (2011), Springer, Berlin Heidelberg, 230–241.

    9. Dean, B.C. Shortest paths in FIFO time-dependent networks: Theory and algorithms. Rapport Technique, 2004, 13.

    10. Delling, D., Wagner, D. Time-dependent route planning. In Robust and Online Large-Scale Optimization: Models and Techniques for Transportation Systems, R. K. Ahuja, R. H. Möhring, and C. D. Zaroliagis, eds. Lecture Notes in Computer Science (2009), Springer, Berlin, Heidelberg, 207–230.

    11. Foschini, L., Hershberger, J., Suri, S. On the complexity of time-dependent shortest paths. Algorithmica 4, 68 (2014), 1075–1097.

    12. Geisberger, R. Engineering time-dependent one-to-all computation. arXiv:1010.0809 [cs] (2010).

    13. Geisberger, R., Sanders, P., Schultes, D., Delling, D. Contraction hierarchies: Faster and simpler hierarchical routing in road networks. In Experimental Algorithms, C. C. McGeoch, ed. Volume 5038 (2008), Springer, Berlin, Heidelberg, 319–333.

    14. He, E., Boland, N., Nemhauser, G., Savelsbergh, M. Computational complexity of time-dependent shortest path problems. Optimization Online (2019), 12.

    15. Helsgaun, K. An Extension of the Lin-Kernighan-Helsgaun TSP Solver for Constrained Traveling Salesman and Vehicle Routing Problems. (2017).

    16. Kovacs, A.A., Parragh, S.N., Doerner, K.F., Hartl, R.F. Adaptive large neighborhood search for service technician routing and scheduling problems. J. Schedul. 5, 15 (2012), 579–600.

    17. Marshall, L., Tankayev, T. Practical risk modeling for the stochastic technician routing and scheduling problem. Optim. Online (2020).

    18. Miranda, D.M., Conceição, S.V. The vehicle routing problem with hard time windows and stochastic travel and service time. Exp. Syst. Appl., 64 (2016), 104–116.

    19. Nicholson, T.A.J. Finding the shortest route between two points in a network. Comput. J. 3, 9 (1966), 275–280.

    20. Pillac, V., Guéret, C., Medaglia, A.L. A parallel matheuristic for the technician routing and scheduling problem. Optim. Lett. 7, 7 (2013), 1525–1535.

    21. Pisinger, D., Ropke, S. A general heuristic for vehicle routing problems. Comput. Oper. Res. 8, 34 (2007), 2403–2435.

    22. Taş, D., Dellaert, N., van Woensel, T., de Kok, T. Vehicle routing problem with stochastic travel times including soft time windows and service costs. Comput. Oper. Res. 1, 40 (2013), 214–224.

    23. Ticha, H.B., Absi, N., Feillet, D., Quilliot, A. Vehicle routing problems with road-network information: State of the art. Networks 3, 72 (2018), 393–406.

    24. Wikipedia contributors. Contraction hierarchies—Wikipedia, the free encyclopedia, 2020. Online [Accessed: 17-March-2020].

    The original version of this paper is entitled "Multi-Itinerary Optimization as Cloud Service (Industrial Paper)" and was published in the Proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems.
