In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant, general approach of doing so involves executing each algorithm instance multiple times, and computing independent estimates based on the performance observations made for each of them. A single execution might be expensive, making this a time-consuming process. We show how an algorithm in general can be viewed as a distribution over executions; and its performance as the expectation of some measure of desirability of an execution, over this distribution. Subsequently, we describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a Proof of Concept and validate it experimentally.
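The abstract's core idea — an algorithm as a distribution over executions, performance as the expected desirability of an execution, and importance sampling to reuse one algorithm's observations for a related algorithm — can be sketched as follows. This is a toy illustration only: the discrete execution space, the probabilities, and the desirability function are invented for the example, not taken from the paper.

```python
import random

# Toy model: an "algorithm" is a distribution over a shared execution space.
# Executions are labeled 0..3; these probabilities are made up for illustration.
p = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}   # algorithm A (the one we actually ran)
q = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}   # algorithm B (related, overlapping support)

def desirability(x):
    # Placeholder performance measure of an execution (hypothetical).
    return float(x)

random.seed(0)
executions = random.choices(list(p), weights=list(p.values()), k=100_000)

# Plain Monte Carlo estimate of A's performance: the mean desirability
# of executions sampled from A's own distribution.
perf_a = sum(desirability(x) for x in executions) / len(executions)

# Importance-sampling estimate of B's performance, reusing A's executions:
# each observation is reweighted by the likelihood ratio q(x) / p(x),
# so the expensive samples are amortized across both algorithms.
perf_b = sum(q[x] / p[x] * desirability(x) for x in executions) / len(executions)

print(perf_a)  # close to E_p[desirability] = 1.0
print(perf_b)  # close to E_q[desirability] = 2.0
```

The reweighted estimator is unbiased as long as B's distribution is absolutely continuous with respect to A's (q assigns no probability to executions that p cannot produce), which is the "partially overlapping distributions" condition the abstract mentions.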
Adriaensen, S, Moons, F & Nowé, A 2017, An Importance Sampling Approach to the Estimation of Algorithm Performance in Automated Algorithm Design. in R Battiti, DE Kvasov & YD Sergeyev (eds), Learning and Intelligent Optimization - 11th International Conference, LION 11, Revised Selected Papers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10556 LNCS, Springer, pp. 3-17, Learning and Intelligent Optimization, Nizhny Novgorod, Russian Federation, 19/06/17. https://doi.org/10.1007/978-3-319-69404-7_1
@inproceedings{636cd8a4989841af8e9f2d674c3d5eef,
title = "An Importance Sampling Approach to the Estimation of Algorithm Performance in Automated Algorithm Design",
abstract = "In this paper we consider the problem of estimating the relative performance of a given set of related algorithms. The predominant, general approach of doing so involves executing each algorithm instance multiple times, and computing independent estimates based on the performance observations made for each of them. A single execution might be expensive, making this a time-consuming process. We show how an algorithm in general can be viewed as a distribution over executions; and its performance as the expectation of some measure of desirability of an execution, over this distribution. Subsequently, we describe how Importance Sampling can be used to generalize performance observations across algorithms with partially overlapping distributions, amortizing the cost of obtaining them. Finally, we implement the proposed approach as a Proof of Concept and validate it experimentally.",
keywords = "Algorithms, Automated algorithm synthesis, Importance sampling, Performance evaluation, Programming by Optimization",
author = "Steven Adriaensen and Filip Moons and Ann Now{\'e}",
year = "2017",
month = jun,
day = "19",
doi = "10.1007/978-3-319-69404-7_1",
language = "English",
isbn = "978-3-319-69403-0",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "3--17",
editor = "Roberto Battiti and Kvasov, {Dmitri E.} and Sergeyev, {Yaroslav D.}",
booktitle = "Learning and Intelligent Optimization - 11th International Conference, LION 11, Revised Selected Papers",
note = "Learning and Intelligent Optimization, LION 11; Conference date: 19-06-2017 through 21-06-2017",
}