Effective training of advanced ML models requires large amounts of labeled data, which is often scarce in scientific problems given the substantial human labor and material cost of collecting it. This poses a challenge in determining when and where to deploy measuring instruments (e.g., in-situ sensors) to collect labeled data efficiently. This problem differs from traditional pool-based active learning settings in that labeling decisions must be made immediately after we observe the input data, which arrive as a time series. In this paper, we develop a real-time active learning method that uses spatial and temporal contextual information to select representative query samples in a reinforcement learning framework. To reduce the need for large training data, we further propose to transfer the policy learned from simulation data generated by existing physics-based models. We demonstrate the effectiveness of the proposed method by predicting streamflow and water temperature in the Delaware River Basin given a limited budget for collecting labeled data. We further study the spatial and temporal distribution of the selected samples to verify the method's ability to select informative samples over space and time.
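The real-time setting described above can be illustrated with a minimal sketch of stream-based active learning: each observation arrives once, and a policy must immediately decide whether to spend labeling budget on it. The `novelty_policy` below is a hypothetical stand-in for the paper's learned reinforcement learning policy, and the running-mean context is a simplified proxy for the spatial and temporal context the method actually uses.

```python
import numpy as np


def stream_active_learning(stream, policy, budget):
    """Select samples to label from a data stream in real time.

    At each time step the policy sees the current observation plus a
    simple temporal context (running mean of past observations) and
    must decide immediately whether to spend one unit of budget.
    Returns the indices of the queried time steps.
    """
    queried = []
    history = []
    for t, x in enumerate(stream):
        context = np.mean(history, axis=0) if history else np.zeros_like(x)
        if budget > 0 and policy(x, context):
            queried.append(t)
            budget -= 1
        history.append(x)
    return queried


def novelty_policy(x, context, threshold=1.0):
    # Hypothetical heuristic: query when the observation deviates
    # strongly from the temporal context. In the paper this decision
    # is made by a learned RL policy instead.
    return np.linalg.norm(x - context) > threshold


rng = np.random.default_rng(0)
stream = rng.normal(size=(50, 3))  # 50 time steps, 3 features each
selected = stream_active_learning(stream, novelty_policy, budget=5)
print(len(selected))  # never exceeds the budget of 5
```

Unlike pool-based active learning, no sample can be revisited later, which is why the decision must be made on arrival from context alone.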
| Title | Graph-based reinforcement learning for active learning in real time: An application in modeling river networks |
| Authors | Xiaowei Jia, Beiyu Lin, Jacob Aaron Zwart, Jeffrey Michael Sadler, Alison P. Appling, Samantha K. Oliver, Jordan Read |
| Publication Type | Conference Paper |
| Publication Subtype | Conference Paper |
| Record Source | USGS Publications Warehouse |
| USGS Organization | WMA - Integrated Information Dissemination Division |