Powered by OpenAIRE graph

Optimal data utilization for goal-oriented learning

Authors: Cowan, Charles Wesley

Abstract

We are interested in the problem of utilizing collected data to inform and direct learning towards a stated goal. In this work, a controller is presented with a finite set of actions that may be taken sequentially (and repeatedly) towards the achievement of some goal. While the outcome of any action is stochastic, each result provides information about future results of that action, and potentially of others. By following a rule or control policy, the controller wishes to sequentially take actions, collect information, and use it to guide future action decisions, in such a way as to approach the stated goal.

In the first model, at least one action is "best", and the goal is to identify and take such an action as frequently as possible. This requires learning the actions' underlying dynamics from repeated observations of their stochastic results, and it encapsulates the classic "exploration vs. exploitation" tension: whether to test many actions, or to take only the action currently believed to be best. We derive asymptotic lower bounds on how effective any universally good policy can be, as a function of initial knowledge. Additionally, we define a generic control policy, give conditions under which it is provably asymptotically optimal, and present a number of examples to illustrate the scope and application of the model.

In the second model, the goal is to maximize some utility of all actions taken, e.g., the total expected reward collected. Additionally, each action has an associated breaking or halting time which, if reached, ends the control process. This again captures the "exploration vs. exploitation" dynamic, as the controller must balance the reward of any one action against the risk of halting and the loss of opportunity for future rewards. Because the goal depends on the actual results achieved, there is generally no single "best" action as in the previous model. In many contexts, we derive a dynamic "action valuation" scheme that gives rise to an optimal control policy.
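
The "exploration vs. exploitation" tension of the first model is the setting of the classic multi-armed bandit problem. As a generic illustration only (this is the standard UCB1 index policy, not the thesis's own control policy), the sketch below simulates a controller that balances testing all actions against repeatedly taking the action with the best current estimate, for hypothetical Bernoulli-reward actions:

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Simulate the UCB1 index policy on Bernoulli arms.

    means   -- true (unknown to the controller) success probabilities
    horizon -- total number of sequential actions taken
    Returns the number of times each action was taken.
    """
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n    # times each action has been taken
    totals = [0.0] * n  # total reward observed per action
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # exploration: take each action once to initialize
        else:
            # index = empirical mean + confidence bonus; the bonus shrinks
            # as an action is sampled more, shifting play toward exploitation
            arm = max(
                range(n),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

# Over a long horizon, the policy concentrates play on the best action.
counts = ucb1([0.3, 0.5, 0.8], horizon=5000)
```

The confidence bonus here plays the role of forced exploration: every action is occasionally retried, so the controller's estimates improve, while suboptimal actions are taken only logarithmically often, which is the behavior the abstract's asymptotic lower bounds characterize.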

  • Impact by BIP!
      • Selected citations: 0 (citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically)
      • Popularity: Average (reflects the "current" impact/attention of the article in the research community at large, based on the underlying citation network)
      • Influence: Average (reflects the overall/total impact of the article in the research community at large, based on the underlying citation network, diachronically)
      • Impulse: Average (reflects the initial momentum of the article directly after its publication, based on the underlying citation network)