Towards Planning in Mainstream Applications - Important Considerations


Biplav Srivastava, IBM Research
Anton V. Riabov, Chief Architect of Cloud and Data Platforms
Adi Botea, IBM Research

Abstract

Planning has been used in many industrial applications, but such applications are still few and far between compared to those of other AI sub-fields like learning, constraints and (business) rules. In this paper, we highlight key considerations that are important in practice and articulate the open issues which, if addressed, we anticipate will trigger a new wave of planning-based applications.

1 Introduction

There is a long history of planning being applied to hard problems like spacecraft control (Chien et al. 2000; Musliner and Goldman 1997) and military logistics (Myers et al. 2011). There are also a few prominent success stories of AI experts successfully applying it to commercial settings like printing (Ruml et al. 2014), packaged software deployment (Hoffmann et al. 2012), ship navigation (Teng et al. 2017) and IT self-management (Srivastava et al. 2004).

However, planning is still not as accessible to developers as other AI sub-disciplines such as machine learning, constraints and (business) rules. There have been occasional reviews of the hurdles that prevent planners from being widely adopted. In (Boddy 2011), the author discusses the representational gap between what the popular language for planning (PDDL) supports and what applications need. In (Ghallab et al. 2014), the authors note that planning has focused more on path-finding methods in a state-transition framework than on the issues important for executing plans.

In this paper, we reflect on the experience of planning-based applications we have built to highlight considerations that are important in practice. We hope this will help the research community develop more consumable planners and thereby trigger a mainstream wave of planning-based applications.

In the rest of the paper, we first explain key practical considerations for planners. Then, we summarise how they were handled in three planning-based applications: machine learning pipelines, journey planning and web services composition. For each case study, we identify the business problem, the planning-based solution and our approach to promoting the usage of planning. We then discuss what our experience means for other developers using available planners and identify avenues for improvement.

2 Background

The traditional focus of planning research has been on developing fast and expressive planners. In contrast, when applying planning in mainstream applications, there are many practical considerations that determine overall success. Some stem from the fact that planners are just another piece of software and hence must meet the well-studied concerns of software engineering. Others deal with the unique nature of planning.

2.1 Licensing

Licensing in the software context deals with the legally-enforceable freedom a user has while using software developed by another party. A license can be unencumbered (e.g., when the user organization develops the software for itself), severely restrictive (e.g., a single-use license), or in between, allowing users to make changes to the software (e.g., the Apache, GPL and MIT licenses). Planners participating in the International Planning Competitions (IPCs) have traditionally released source code for the competition but do not have to make it available to the public. Some of the most popular planners like FF and Fast Downward cede some control under the GPL, but they may not be usable in commercial products. In fact, most commercial usage of planning is with custom planners that have not participated in the IPCs.

2.2 Software Development Ecosystem

While planners have matured rapidly over the two decades since Graphplan in the mid-1990s, the mainstream software development ecosystem has also evolved to reduce its cost drivers – the cost of software development and maintenance. While production software consisted of monolithic C/C++ programs in the 1990s built using the waterfall process, it turned to interpreted languages like Java in the 2000s, adopted service-oriented frameworks like web services, and became cloud-based applications in the past few years using DevOps (agile) processes. Planners, on the other hand, evolved from Lisp programs to C/C++, but very few public state-of-the-art planners are written in Java or available on the cloud. As a result, most large applications use planners via a software bridge (Java to C/C++, or a web-based service bridge) but cannot benefit from the planner's intermediate results (e.g., partial plans).
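
As a minimal sketch of such a web-based service bridge, the following Python snippet exposes a hypothetical command-line planner binary (./planner, taking domain and problem files; the binary and its CLI conventions are assumptions, not any specific planner's) to a larger application over HTTP. Note how only the final plan crosses the bridge:

    # Minimal sketch of a web-based service bridge around a command-line
    # planner. The binary name and its CLI conventions are assumptions.
    import subprocess
    import tempfile
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PlanHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            problem = self.rfile.read(int(self.headers["Content-Length"]))
            with tempfile.NamedTemporaryFile(suffix=".pddl") as f:
                f.write(problem)
                f.flush()
                # Only the final plan crosses the bridge; the planner's
                # intermediate results (e.g., partial plans) are lost.
                res = subprocess.run(["./planner", "domain.pddl", f.name],
                                     capture_output=True, timeout=60)
            self.send_response(200 if res.returncode == 0 else 500)
            self.end_headers()
            self.wfile.write(res.stdout)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), PlanHandler).serve_forever()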

2.3 Knowledge engineering

This is the most critical part of using a planner: the model of the domain is formalised in terms of predicates and actions, and so are the problem(s) which the planner will be asked to solve. Despite an increase in tools for knowledge engineering, models are still most commonly built by hand. Model acquisition, debugging and maintenance can become frustration points in practice for non-planning developers. They also represent gaps needing research impetus.
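
To make concrete what such a hand-built model looks like, here is a toy domain and problem in PDDL, written out from Python; the model is purely illustrative and not drawn from any of the applications discussed below:

    # A toy hand-written PDDL model, of the kind knowledge engineering
    # produces. Domain and problem are illustrative only.
    DOMAIN = """
    (define (domain logistics-toy)
      (:predicates (at ?truck ?loc) (connected ?from ?to))
      (:action drive
        :parameters (?truck ?from ?to)
        :precondition (and (at ?truck ?from) (connected ?from ?to))
        :effect (and (at ?truck ?to) (not (at ?truck ?from)))))
    """
    PROBLEM = """
    (define (problem deliver)
      (:domain logistics-toy)
      (:objects t1 depot store)
      (:init (at t1 depot) (connected depot store))
      (:goal (at t1 store)))
    """
    with open("domain.pddl", "w") as f:
        f.write(DOMAIN)
    with open("problem.pddl", "w") as f:
        f.write(PROBLEM)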

2.4 Visualization

The ability to see a generated plan and understand its actions and their role in the plan can go a long way towards the user's acceptance of planning. Developers are used to visual tools from software development environments and demand the same from planners. Unfortunately, plan visualization has not received much research interest.

2.5 Benchmarking value of planning

Any new component in an application has to prove its value, and the same is true for a planner. It should be easy for developers to see the impact of planning. Therefore, guidance on experimental setup, and on relevant quantitative and qualitative evaluation, is valuable.
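
One hedged sketch of such an evaluation setup, with hypothetical solver hooks, is a harness that reports runtime and a crude plan-quality proxy for the planner against a hand-coded baseline:

    # Sketch of an evaluation harness comparing a planner to a hand-coded
    # baseline. solve_with_planner and solve_baseline are hypothetical hooks.
    import time

    def evaluate(problems, solve_with_planner, solve_baseline):
        for prob in problems:
            for name, solver in (("planner", solve_with_planner),
                                 ("baseline", solve_baseline)):
                start = time.perf_counter()
                plan = solver(prob)
                elapsed = time.perf_counter() - start
                # plan length as a crude quality proxy
                cost = len(plan) if plan else float("inf")
                print(f"{prob}\t{name}\truntime={elapsed:.3f}s\tcost={cost}")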

2.6 Plan execution and monitoring

The output of planners can be executed by people or by computing systems. Further, if plans fail, automated replanning methods may be needed to recover from the situation and try alternatives. Furthermore, if planning-enabled applications execute in an environment shared with humans, the latter should be able to understand failures and manually re-plan if needed.
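
A minimal sketch of such an execute-monitor-replan loop, with the planner, executor and sensing left as hypothetical hooks an application would provide, could look as follows:

    # Sketch of an execute-monitor-replan loop. plan(), execute_step() and
    # observe() are hypothetical hooks an application would provide.
    def run(goal, observe, plan, execute_step):
        current_plan = plan(observe(), goal)
        while current_plan:
            action = current_plan.pop(0)
            if not execute_step(action):          # action failed in the world
                current_plan = plan(observe(), goal)   # automated replanning
                if current_plan is None:
                    raise RuntimeError("no recovery plan; hand over to a human")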

2.7 Any special features

Occasionally, applications require new features from planners for the overall system to operate effectively. As users, developers would prefer the ability to enhance a planner themselves to support what they need, and software mechanisms like plug-ins support this.
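
As an illustration (not a feature of any specific planner), a plug-in mechanism can be as simple as a registry that lets developers add, say, custom heuristics without touching the planner core:

    # Sketch of a plug-in mechanism: developers register extensions (here,
    # heuristics) by name, without modifying the planner's core code.
    HEURISTICS = {}

    def register_heuristic(name):
        def wrap(fn):
            HEURISTICS[name] = fn
            return fn
        return wrap

    @register_heuristic("goal-count")
    def goal_count(state, goal):
        # state and goal are sets of ground atoms; count unsatisfied goals
        return len(goal - state)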

3 Planning for Machine Learning Pipelines

3.1 The Problem

The recent rapid growth in business applications of Big Data, machine learning and data science has generated increasing interest in simplifying the routine steps of data analysis. A wide variety of feature extraction, dimensionality reduction and classification methods are available today, and their implementations can vary across multiple platforms (e.g., Weka, scikit-learn, R). Furthermore, each element of the pipeline can be instantiated with different parameters. As a result, data scientists evaluate only a small number of analytic pipelines when analyzing a dataset.

Systems that can automatically compose, run and evaluate analytic pipelines systematically and across platforms can significantly improve over the manual process. These systems for pipeline composition must have the following properties:

1. Completeness: The system should evaluate all possible pipelines.

2. Correctness: Compositions must be correct, ensuring each composed pipeline is a valid program.

3. Cross-Platform Composition: The system must be able to generate, deploy and execute pipelines across multiple platforms.

4. Extensibility: New pipeline elements and platforms must be easy to integrate into the system, without significant development or engineering efforts.

5. Encapsulation: Most changes to a pipeline element should not require changes to other elements.

Figure 1: Planning For Machine Learning Pipelines

As shown by a significant body of work on planning-based web service composition in the early 2000s, planners can be used to compose web services. Below we discuss how similar techniques can be adapted for the composition of machine learning and data science pipelines.

3.2 The Planning Approach

Biem and others (Biem et al. 2015) developed a planning-based approach for the composition of data science and machine learning pipelines (see Figure 1). This system introduced a rigorous, semantically-guided meta-learning and parameter selection approach, which relies on a planning system (MARIO) for pipeline composition.

Planning domains are generated from the descriptions of pipeline elements in Cascade, a language developed specifically for this purpose. Cascade provides the following language primitives for describing the problem:

1. Abstract components, which provide a hierarchy of roles that pipeline elements can play within the pipelines (e.g., a Classifier abstract component can have an SVM abstract component deriving from it);
2. Concrete components, describing platform-specific implementations of abstract components (e.g., the Weka LibSVM implementation);
3. Patterns, which allow describing a high-level flow of abstract components interconnected via data inputs and outputs, for example requiring that a Feature Selection component be followed by a Classifier, followed by a Presentation.

Cascade descriptions of the pipeline elements can be mapped, for example, to HTN (Sohrabi et al. 2013) or to the MARIO-specific SPPL language. MARIO provides efficient pipeline composition and execution on a variety of platforms, including Weka, SPSS, R, and Apache Spark. The code that drives execution on these platforms is generated by MARIO by composing together code fragments included in the concrete component descriptions in Cascade. This composition of code fragments treats code as text, without parsing the language structure, allowing MARIO to support a wide variety of target languages, with minimal macro substitution used to inject parameter values and directory paths.
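
The sketch below illustrates this treat-code-as-text style using Python's string.Template; the fragments and parameter names are invented for illustration and are not actual Cascade components:

    # Minimal sketch of composing code fragments as text, in the spirit of
    # MARIO's approach: fragments are opaque strings; only macros are filled.
    from string import Template

    fragments = [  # hypothetical concrete-component code fragments
        Template("data = load_csv('$input_path')\n"),
        Template("model = train_svm(data, C=$svm_c)\n"),
        Template("report(model, out_dir='$out_dir')\n"),
    ]
    params = {"input_path": "train.csv", "svm_c": 1.0, "out_dir": "results/"}
    pipeline_code = "".join(f.substitute(params) for f in fragments)
    print(pipeline_code)   # generated program for the target platform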

3.3 Discussion

Licensing. A specialized planner was developed for MARIO, which both provided better control over performance on planning tasks of interest and removed the need to deal with licensing restrictions of existing deterministic planners for business use.
Architecture and Tooling. A web-based UI to test compositions was made available to developers in MARIO, and a REST API (Application Programming Interface) was provided for integration with the learning subsystem.
Knowledge engineering. A specialized Eclipse-based IDE was developed for the Cascade language, to make it easier to add new pipeline elements. More than 200 component element descriptions in Cascade were added by developers using this tooling.
Visualization. While pipeline compositions were visualized graphically in MARIO's developer web UI, developers relied on the generated code for more advanced debugging of Cascade components and patterns.
Benchmarking value. The system was designed to replace a manual process, and it performed demonstrably well as such. The experiments performed were end-to-end, evaluating not only the planning component but also overall system performance, measured by the quality of the composed analytic pipelines and the speed with which the system produced them. The approach showed a significant performance improvement over the manual approach, driving wide adoption of the system in the organization.
Plan execution and monitoring. Plan execution in MARIO is accomplished by mapping plans to the code of the pipeline, and executing the code on the corresponding platforms. Because a central component is responsible for all code generation, it can detect and take advantage of reusable pipeline elements. Elements are reusable when the same element is applied to the same data in multiple pipelines as part of model selection. Monitoring involves detecting completion of the pipeline (or execution errors) and routing the results back to the learning subsystem. In that process, saving the intermediate results of processing throughout the pipeline proved valuable both for debugging and for presenting the selected pipeline to end-users.
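
A hedged sketch of such reuse, with a hypothetical element interface, is a cache keyed by the element, its parameters and the input data:

    # Sketch of reusing pipeline-element results across composed pipelines.
    # The element interface (id, params_hash) is hypothetical.
    cache = {}

    def run_element(element, data_id, execute):
        key = (element.id, element.params_hash(), data_id)
        if key not in cache:
            cache[key] = execute(element, data_id)  # run on target platform
        return cache[key]
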
Planner consumability. The specialized planner was tuned over several years for optimal performance on these tasks. In one case, a new search algorithm was added to address specific requirements (efficient composable pipeline enumeration).
Other considerations. In this application, the value of planning was determined by the overall value the system provided by composing and selecting the best machine learning pipeline for a given task on a given data set, as quickly as possible. Integration with a wide variety of systems for plan execution, and providing the best possible experience to the developers supplying domain knowledge, were key to the successful adoption of planning technologies.

4 Multi-Modal Journey Planning

4.1 The Problem

Computing a multi-modal itinerary is an important application of AI planning. For example, a trip in a city could involve different transport modes, such as public transport, driving, cycling, and walking. In real life, transportation networks can feature uncertainty in aspects such as the arrival and the departure times of buses, the availability of parking, the duration of a driving trip segment (leg) and others.

Today, the de facto standard is deterministic journey planning. Yet, in the presence of uncertainty, predicting whether planned connections will succeed cannot always be done accurately. When an action, such as catching a connection, can succeed only with a given probability, deterministic planning must rely on a simplifying assumption. An optimistic assumption is that the connection will succeed for sure. The disadvantage of such an assumption is that, if the connection is missed in reality, the plan becomes invalidated. Re-planning is useful only if good alternatives happen to exist in the state where the plan breaks.

A pessimistic assumption is that the connection will always be missed. Plans computed in this manner are safer. However, their disadvantage is that they are not opportunistic. Deterministic planning cannot distinguish between two trajectories with the same deterministic cost, even if one trajectory features probabilistic opportunities, such as faster connections than the planned ones. Planning under uncertainty can provide policies (contingent plans) that are both opportunistic (including good actions that can succeed with a given probability) and safe (including good alternatives when preferred but uncertain actions fail).

4.2 The Planning Approach

Botea et al. (2013) introduced Dija, a search-based system for multi-modal journey planning in the presence of uncertainty. The search space is an AND/OR state space where some of the transitions (actions) have non-deterministic effects. The action of boarding a public transport vehicle can succeed or fail, depending on the stochastic arrival time of the traveler at the stop and the stochastic departure time of the vehicle.

The search is based on the AO* algorithm, and it implements heuristic functions and pruning techniques. Given a destination, admissible estimates of the travel time to the destination are pre-computed for important locations on the map, such as public transport stops, and stored in a lookup table. Similar lookup tables are populated with admissible estimates of other metrics, such as the number of legs to the destination.

The estimates of the travel time and the number of legs are used as an admissible heuristic in AO*. Admissible lookup tables can further be used for pruning. Imagine that the number of legs up to the current state in the search, plus an admissible estimate of the number of legs needed from the current state to the destination, exceeds a maximum acceptable value set by the user. The state at hand can then be pruned away. Similar pruning can be performed based on the cycling time and the walking time. Other types of pruning include pruning based on state dominance, and a domain-specific technique that prunes away walking actions (Botea 2016).

Pruning can also be performed using state dominance. For instance, assume that a location on the map is reached in the search along two different pathways. Compared to the second one, the first pathway requires at most the same travel time, at most the same walking time, at most the same cycling time, and at most the same number of legs. Furthermore, the first pathway is reached on a sequence of deterministic transitions (i.e., following it does not depend on chance). Then the first pathway dominates the second, and the latter can be pruned away. Heuristics and pruning techniques are key to making an algorithm such as AO* scale to the size of a real city.
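
As a sketch, assuming search states carry the four metrics and a flag for deterministic reachability, the dominance test reads:

    # Sketch of the state-dominance test described above. A pathway reached
    # deterministically dominates another to the same location if it is no
    # worse on every tracked metric.
    def dominates(a, b):
        return (a.deterministic
                and a.location == b.location
                and a.travel_time <= b.travel_time
                and a.walking_time <= b.walking_time
                and a.cycling_time <= b.cycling_time
                and a.legs <= b.legs)

    # During search, a newly generated state b can be pruned if some stored
    # state a at the same location satisfies dominates(a, b).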

4.3 Discussion

Licensing. The planning engine was developed from scratch, and is thus free from any potential licensing issues. Custom development allowed the team to focus on an efficient, domain-specific implementation.

Architecture and tooling. In a deployed system, a multi-modal journey planning engine is a component of a larger ecosystem, which we call a multi-modal journey advisor. Besides the planner, modules include: aggregating all network data into a so-called network snapshot; keeping track of active trips, with monitoring, detection of plan invalidation, and replanning; interaction with the user, through an API and apps; and collecting data and presenting meaningful analyses to an operator.

Docit (Botea et al. 2016) is a multi-modal journey advisor that implements the functionality summarized in the previous paragraphs (see Figure 2). Dija is integrated into Docit as a multi-modal journey planner, as is a planning system called Frequency Planner (Nonner 2012).

Knowledge engineering: data acquisition and representation. The knowledge about a multi-modal transportation network can be static or dynamic. Static data is easier to create and more broadly available. For instance, sources such as OpenStreetMap provide static roadmap data. Data about scheduled public transport vehicles (e.g., buses and trains) can be represented in the GTFS format. More and more cities make static public transport data available in the GTFS format. In Docit, GTFS was extended in two directions: representing uncertainty in the data (e.g., in the arrival and the departure times of buses), and integrating additional transport modes, such as private cars, shared cars, taxis, and bicycles, into a shared network.
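
A hedged sketch of such an uncertainty extension, with invented field names rather than the actual Docit schema, attaches a discrete delay distribution to a GTFS-like stop time:

    # Sketch of extending a GTFS-like stop-time record with uncertainty.
    # Field names are illustrative, not the actual Docit extension schema.
    from dataclasses import dataclass

    @dataclass
    class StochasticStopTime:
        trip_id: str
        stop_id: str
        scheduled_departure: int     # seconds since midnight, as in GTFS
        departure_offsets: list      # possible delays, in seconds
        offset_probs: list           # probability of each delay

        def prob_departs_after(self, t):
            # probability that the vehicle has not yet left at time t
            return sum(p for d, p in zip(self.departure_offsets,
                                         self.offset_probs)
                       if self.scheduled_departure + d > t)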

Dynamic data, such as traffic conditions or updates to the actual schedules of buses, requires sensing capabilities (e.g., cars reporting their speed, buses reporting their GPS location) and an expensive additional infrastructure. When Docit was deployed in cities such as Rome, Italy and Haifa, Israel, as part of the EU project Petra, dynamic updates to the public transport schedules were provided to the system.

Visualisation. Docit computes contingent journey plans, for better robustness against failures in a network with dynamic events (see Figure 3). While contingent plans can have a tree shape in general, most users are accustomed to deterministic, sequential plans. We found that contingent plans are more difficult to explain to an audience with no AI knowledge. An interesting lesson learned from this work is that a useful feature can also be a hard-to-grasp feature. This can create a barrier, making the adoption of the feature sensitive to how well users understand it.

Visualising a contingent plan, and key summary data about it, in a mobile app is more challenging than doing so for sequential plans. Docit provides an API for interaction with a mobile app. Third parties can develop apps complying with the API. Given a contingent plan, a mobile app developed at IBM Research, Ireland visualises the possible arrival times, with a probability associated with each.

Benchmarking value: experimental, qualitative. As previous work focused on deterministic multi-modal journey planning, no benchmarking data was available for multi-modal journey planning under uncertainty. An important benefit stemming from systems such as Docit is the opportunity to learn about the performance and the potential of uncertainty-aware multi-modal journey planning. This allows comparing contingent planning and deterministic planning in terms of plan quality, planning speed, and scalability.

Plan execution and monitoring. Besides plan computation, a multi-modal journey advisor should contain other important components, such as monitoring the progress of a trip, monitoring the status of the transport network, detecting cases when the current plan is invalidated, and performing replanning. Botea and Braghin (2015) presented a technique to simulate ahead the execution of the plan, given new information about the transportation network. The outcome helps decide whether recent changes in the transportation network, not available when the journey plan was computed, impact the plan at hand. If so, the user is notified and re-planning may be triggered. A minimal sketch follows.
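
The sketch below replays the remaining plan against the latest network snapshot; the snapshot interface is a hypothetical stand-in for the real network model, not the published technique's actual API:

    # Sketch of simulate-ahead plan checking: replay the remaining plan
    # against fresh network data and flag it if a connection no longer holds.
    def plan_still_valid(remaining_plan, position, clock, snapshot):
        t = clock
        for leg in remaining_plan:
            # hypothetical API: next feasible departure at or after time t
            dep = snapshot.next_departure(leg, position, t)
            if dep is None:        # no feasible connection any more
                return False       # notify the user; possibly re-plan
            t = dep + snapshot.duration(leg)
            position = leg.destination
        return True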

Other remarks. As AI researchers, we might be more excited about developing innovative approaches, such as moving from deterministic to non-deterministic planning in a domain where deterministic planning is the broadly embraced de facto standard. Yet, regardless of whether it implements a novel technology, a deployed system needs to feature many “standard” features. In multi-modal journey planning, examples include the following options for a user: choosing between a departure time and an arrival time; choosing a subset of transport modes acceptable in a trip; setting maximum acceptable values for, say, the total walking time, the number of legs in a journey, and the cycling time; and a user-friendly mobile app. The effort required to implement these is considerable, and the level of detail can be crucially important.

5 Planning for Web Services


5.1 The Problem

Commercial IT solutions were long built as monolithic applications in an ad-hoc manner, resulting in poor reuse of software assets, longer time-to-delivery and higher software maintenance costs. Viewing software components as (web-enabled) services, the promise of automated (web) service composition is that new applications can be seamlessly assembled from existing components (services) while meeting overall functional and non-functional requirements.

Service-oriented computing was implemented in the early 2000s using web services standards (e.g., WSDL, SOAP and BPEL4WS) and, lately, using REpresentational State Transfer (REST) APIs (Application Programming Interfaces). The relative trade-offs between the two architectural styles were studied by (Pautasso et al. 2008) based on the architectural decisions that must be made and the number of available alternatives. The authors concluded that REST is well suited for basic, ad hoc integration scenarios, while web services standards are suitable for the operational service requirements of enterprise computing. The automated composition problem, however, is common to both implementation styles. For more details on approaches to web services composition (WSC) and its issues, see (Lemos et al. 2015; Srivastava and Koehler 2003).

5.2 The Planning Approach

There are two key novelties in the Synthy approach (Agarwal et al. 2005; Srivastava 2006) to WSC: (a) it distinguishes between web service instances that are actually deployable and web service types which represent a group of web services with similar functionality, and (b) it composes in two stages, representing what components are needed for the composite component and how the components should be connected together to realise the needed functionality. More specifically, the functional requirements of the new service are used by goal-driven planning to automatically generate abstract plans, and then non-functional requirements are used to concretize the plans by optimally selecting the available service instances. This is illustrated in Fig. 4. A Service Registry contains information about services available in-house as well as with participating third-party providers. The capabilities of each available service type are described formally, using domain-specific terminology that is defined in a Domain Ontology.

When a new service needs to be created, the developer provides a Service Specification to the Logical Composer (LC) module. Driven by the specified requirements, the Logical Composer uses generative planning to create a composition of the available service types. Its goal is to explore qualitatively different choices and produce an abstract workflow (i.e., a satisfiable plan) that meets the specified requirements. From a planning perspective, the planner needs to be efficient in this interactive and uncertain domain: the domain could be incompletely modeled, the user has hard and soft constraints, and the number of web service types could be large. We use a contingent planner that uses optional user inputs to efficiently find the plans that matter most to the user (Mediratta and Srivastava 2006).

In order to turn the plan into a concrete workflow that can be deployed and executed, specific instances are chosen for the component services in the plan. The Physical Composer (PC) queries the service registry for deployed web service instances and uses scheduling and compilation techniques to select the optimal web service instances, producing an executable workflow (Agarwal et al. 2005). The focus is now on quantitatively exploring the available web service instances for workflow execution.
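
As a simplified sketch of the two stages (greedy selection on a single non-functional metric, rather than Synthy's actual scheduling and compilation techniques; the registry interface is hypothetical), concretization maps each service type in the abstract plan to a deployed instance:

    # Sketch of concretization: the abstract plan fixes *which* service
    # types are composed; instance selection then optimises a non-functional
    # metric (here, latency) over deployed instances.
    def concretize(abstract_plan, registry):
        workflow = []
        for service_type in abstract_plan:     # e.g., ["Geocode", "Route"]
            candidates = registry.instances_of(service_type)  # hypothetical
            best = min(candidates, key=lambda inst: inst.latency_ms)
            workflow.append(best)
        return workflow                        # executable workflow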

The workflow generated by the service creation environment is then deployed onto a runtime infrastructure, and executed in an efficient and scalable manner. The system used a commercial workflow engine for the purpose.

The Synthy system, instead of producing a single plan, generated multiple plans (workflows) at the logical and physical stages. This allowed the PC or the execution engine to be flexible and try out alternative plans, without returning to the previous stage, if one of the plans failed for any operational reason. This limited failure tolerance during WSC was unique to Synthy (Agarwal et al. 2005).

5.3 Discussion

5.3.1 Licensing 
In Synthy, the planner was developed from scratch within the organization and hence internal users had full freedom. External users needed a commercial license.

5.3.2 Software Development Ecosystem
The development was in pure Java, in Eclipse.

5.3.3 Knowledge engineering
The planning model was manually created and debugged using the planner. To support this, when a problem was unsolvable due to a possible bug, the planner could be asked to generate the best k partial plans closest to the goal. The semantic descriptions of services were also manually created in the Protege tool and tested using the Pellet reasoner integrated with it.

5.3.4 Visualisation 
We developed a composition environment in Synthy, which can be seen as an Integrated Development Environment (IDE), using Eclipse's plugin framework – see Figure 5. A user could perform logical and physical reasoning in the same environment, view the evolving workflow (abstract, physical) as text or as a graph, and check the properties of its actions.

5.3.5 Benchmarking value of planning 
The evaluation of the Synthy approach was qualitative. The alternatives were manual and automatic composition approaches, and the relative efforts in composing new services were compared. The individual planner (Planner4J) was tested quantitatively for performance, and it could easily handle common composition scenarios.

5.3.6 Plan execution and monitoring
Synthy used an off-the-shelf workflow engine (BPEL4WS) for execution. The accompanying tools allowed execution monitoring, and this helped implement a simple fault tolerance scheme relying on multiple workflows. Specifically, Synthy allowed multiple abstract and physical plans to be generated so that the executor, when faced with a failing plan, could switch to an alternative until all were exhausted and re-planning had to be done. More details are in (Agarwal et al. 2005).
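
A minimal sketch of this failover scheme, with hypothetical workflow and executor objects, is:

    # Sketch of the failover scheme: try pre-generated alternative workflows
    # in order; only when all are exhausted is re-planning needed.
    def execute_with_failover(workflows, execute):
        for wf in workflows:
            try:
                return execute(wf)     # success: done
            except RuntimeError:       # operational failure; try next plan
                continue
        raise RuntimeError("all alternative workflows failed; re-plan needed")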

5.3.7 Any special features
Planner4J provided automatic parameter tuning (Srivastava and Mediratta 2005), generation of multiple plans, and support for fault tolerance. While developing a domain model, the user could ask for partial plans to help in debugging the PDDL model.

6 Discussion

Table 1 gives a summary of the three case studies. We notice that all the planners used were custom-built to allow maximum freedom to the consuming organization. However, this is not practical for most developers, who want software under other licenses as well. Hence, there is an opportunity to innovate on licenses, including the per-use licenses common with APIs.

Moving to the software development process, the planners were written as monolithic software in a variety of languages but were not componentized for enhancement by users. For knowledge engineering, specific environments were created or suitable converters written to help users build planning models. Synthy additionally provided programmatic interfaces for developing planning models. Further, custom visualisations were created to help users understand generated plans. Thus, all three applications paid attention to supporting users with multiple ways of doing knowledge engineering and plan visualisation. This is an area needing more research.

The value of planning was assessed using custom evaluations. Considering that this is required by all users, there is an opportunity to standardise planner evaluations so that a consumer can switch among different planning configurations (or different planners) easily. Given the success of planning competitions, which evaluate planners on performance and, to some extent, plan quality, planner evaluation approaches could similarly be standardised for consumers.

The three applications diverged in how plans were executed, depending on their unique context. One had plans executed manually (journey recommendation) while the others allowed automated execution. Execution monitoring becomes challenging unless a standard plan execution approach with clear semantics (like workflows) is adopted.

Finally, all applications required special features from the planner. This indicates that if developers are to use planners, they need mechanisms to customise or extend planner behavior.

7 Conclusion

This paper was motivated by the observation that, although there has been tremendous progress in building efficient planners over the years, planning-based applications are not as commonplace as those of other AI sub-fields like learning, constraints and (business) rules. We looked at three case studies of planning being used in mainstream applications and identified considerations that are important in practice. We argue that improvements in these by the research community will help unleash mainstream adoption of planning by developers who are not AI experts.

References

Vikas Agarwal, Girish Chafle, Koustuv Dasgupta, Neeran Karnik, Arun Kumar, Sumit Mittal, and Biplav Srivastava.
Synthy: A system for end to end composition of web services. Web Semant., 3(4):311–339, December 2005.

Alain Biem, Maria Butrico, Mark Feblowitz, Tim Klinger, Yuri Malitsky, Kenney Ng, Adam Perer, Chandra Reddy, Anton Riabov, Horst Samulowitz, Daby Sow, Gerald Tesauro, and Deepak Turaga. Towards cognitive automation of data science. In AAAI’15 System Demonstrations, 2015.

Mark S. Boddy. Imperfect match: PDDL 2.1 and real applications. CoRR, abs/1110.2730, 2011.

A. Botea and S. Braghin. Contingent versus deterministic plans in multi-modal journey planning. In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS), pages 268–272, 2015.

Adi Botea, Evdokia Nikolova, and Michele Berlingerio. Multi-modal journey planning in the presence of uncertainty. In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS), 2013.

A. Botea, M. Berlingerio, E. Bouillet, S. Braghin, F. Calabrese, B. Chen, Y. Gkoufas, M. Laummans, R. Nair, and T. Nonner. Docit: An integrated system for risk-averse multi-modal journey advising. In Smart Cities and Homes: Key Enabling Technologies. Morgan Kaufmann, 2016.

Adi Botea. Hedging the risk of delays in multimodal journey planning. AI Magazine, 37(4):105–106, 2016.

S. Chien, G. Rabideau, R. Knight, R. Sherwood, B. Engelhardt, D. Mutz, T. Estlin, B. Smith, F. Fisher, T. Barrett, G. Stebbins, and D. Tran. ASPEN – automated planning and scheduling for space mission operations. In SpaceOps, 2000.

Malik Ghallab, Dana Nau, and Paolo Traverso. The actor’s view of automated planning and acting: A position paper. Artif. Intell., 208:1–17, March 2014.

Jörg Hoffmann, Ingo Weber, and Frank Michael Kraft. SAP speaks PDDL: Exploiting a software-engineering model for planning in business process management. J. Artif. Intell. Res., 44:587–632, 2012.

Angel Lagares Lemos, Florian Daniel, and Boualem Benatallah. Web service composition: A survey of techniques and tools. ACM Comput. Surv., 48(3):33:1–33:41, December 2015.

A. Mediratta and B. Srivastava. Applying Planning in Composition of Web Services with a User-Driven Contingent Planner. Technical Report RI06002, IBM, February 2006.

David Musliner and Robert P. Goldman. CIRCA and the Cassini Saturn orbit insertion: Solving a prepositioning problem. IEEE Transactions on Systems, Man, and Cybernetics, 23:1561–1574, 1997.

K. Myers, J. Kolojejchick, C. Angiolillo, T. Cummings, T. Garvey, M. Gervasio, W. Haines, C. Jones, J. Knittel, D. Morley, W. Ommert, and S. Potter. Learning by demonstration technology for military planning and decision making: A deployment story. In Proceedings of the Conference on Innovative Applications of Artificial Intelligence (IAAI-11), 2011.

Tim Nonner. Polynomial-time approximation schemes for shortest path with alternatives. In ESA, pages 755–765, 2012.

Cesare Pautasso, Olaf Zimmermann, and Frank Leymann. RESTful web services vs. “big” web services: Making the right architectural decision. In 17th World Wide Web Conference (WWW 2008), pages 805–814, Beijing, China, April 2008. ACM.

Wheeler Ruml, Minh Binh Do, Rong Zhou, and Markus P. J. Fromherz. On-line planning and scheduling: An application to controlling modular printers. CoRR, abs/1401.3875, 2014.

Shirin Sohrabi, Octavian Udrea, Anand Ranganathan, and Anton V. Riabov. HTN planning for the composition of stream processing applications. In Proceedings of the Twenty-Third International Conference on International Conference on Automated Planning and Scheduling, ICAPS’13, pages 443–451. AAAI Press, 2013.

Biplav Srivastava and Jana Koehler. Web service composition – current solutions and open problems. In ICAPS 2003 Workshop on Planning for Web Services, pages 28–35, 2003.

Biplav Srivastava and Anupam Mediratta. Domain-dependent parameter selection of search-based algorithms compatible with user performance criteria. In Proceedings of the 20th National Conference on Artificial Intelligence – Volume 3, AAAI’05, pages 1386–1391. AAAI Press, 2005.

B. Srivastava, J. P. Bigus, and D. A. Schlosnagle. Bringing planning to business applications with ABLE. In Proc. International Conference on Autonomic Computing (ICAC 2004), 2004.

B. Srivastava. The Synthy approach for end to end web services composition: Planning with decoupled causal and resource reasoning. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), 2006.

Teck-Hou Teng, Hoong Chuin Lau, and Akshat Kumar. Coordinating vessel traffic to improve safety and efficiency.
In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’17, pages 141–149, Richland, SC, 2017. International Foundation for Autonomous Agents and Multiagent Systems.