
Evaluation Plan

We strive to make these tools better; when we succeed, that means cleaner water, fewer basement floods, and happier, healthier communities. Please help us make this the best tool possible. Thanks!

Evaluating the performance of the products developed in this effort is a valuable task that greatly benefits their long-term success and use. Toward this aim, short- and long-term evaluation goals are outlined here. Additionally, we discuss the best ways to provide feedback.

Giving Feedback

Feedback is the most important part of this work. Please consider actively participating in the development of this tool.

Feedback on GitHub Project

Providing feedback on the project’s GitHub page is the best and quickest way to ensure your feedback will be addressed. To do so, navigate to the project’s Issue Tracking Page and leave a comment. From this portal, other users can join the discussion about the issue and how to fix it. Once the issue or feedback is resolved, the issue can be closed.

To learn more about using issues, refer to GitHub’s advice on Mastering Issues.

Periodic Feedback Meetings

Three to four meetings per year should be scheduled with GLWA stakeholders for an open discussion of what is working and what could be improved in the workflow and products.

Short-Term Evaluation Goals

The primary objective in the short term is to improve the user experience during real precipitation events. We anticipate that the dashboard interface for communicating recommendations, and the workflow that supports it, will be the key areas for improvement. In the coming months a meeting will be scheduled to discuss improvements. Give feedback here.

Long-Term Evaluation Goals

Currently, GLWA successfully treats 93-96% of all flows that enter its collection system with at least primary treatment. As a result, CSO events are rare under current conditions, and so are the events that would demonstrate the benefit of the decision-support tool. Only over the long term will the performance of these applications become evident. In this vein, a long-term evaluation plan was developed to inform the ongoing and future assessment of the decision-support tool.

Evaluation Metrics

During discussions with GLWA and University of Michigan team members, three performance metrics were identified to assess the project’s success. These metrics are:

[1] Fewer CSO events during smaller storms. Storms with relatively small total depths but high intensity or strong localization can still make it difficult for operators to avoid CSOs. It is hypothesized that CSOs from smaller storm events will decrease after implementation of the decision-support tool.

[2] Shorter CSO durations during larger storms. The nature of the control engine is to distribute stress throughout the system in proportion to the storage assets. As a result, more storage volume should be utilized at any given moment during a storm event. By extension, it is anticipated that this will shorten the duration of CSO events during larger storms.

[3] Reduced total CSO volume. Taken together, the two metrics above imply that total CSO volumes are expected to decrease.

Evaluation Procedure

The manner in which the applications were implemented aids the long-term assessment. Because concurrent system data and recommendation data are stored, future analyses using these data can show how closely recommendations were followed under adoption. Likewise, historical CSO patterns pre- and post-adoption are available in GLWA databases. Combined, these two data sources will be used to assess performance along the above metrics.
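To illustrate how the stored event records could feed the three metrics above, here is a minimal sketch in Python. All data values, field names, and the storm-depth threshold separating "smaller" from "larger" storms are hypothetical placeholders, not GLWA data or an agreed methodology.

```python
# Hypothetical sketch: comparing the three evaluation metrics before and
# after adoption of the decision-support tool. All records and thresholds
# below are illustrative assumptions, not GLWA data.

SMALL_STORM_MAX_DEPTH_IN = 1.0  # assumed small/large storm cutoff, inches

# Each record: analysis period, storm depth (in), whether a CSO occurred,
# CSO duration (hr), and CSO volume (MG).
events = [
    {"period": "pre",  "depth": 0.8, "cso": True,  "duration": 2.0, "volume": 1.5},
    {"period": "pre",  "depth": 2.5, "cso": True,  "duration": 6.0, "volume": 8.0},
    {"period": "post", "depth": 0.9, "cso": False, "duration": 0.0, "volume": 0.0},
    {"period": "post", "depth": 2.4, "cso": True,  "duration": 4.0, "volume": 5.0},
]

def metrics(period):
    """Return (small-storm CSO count, mean large-storm CSO duration,
    total CSO volume) for one analysis period."""
    rows = [e for e in events if e["period"] == period]
    # Metric 1: number of CSO events during smaller storms.
    small_storm_csos = sum(
        1 for e in rows if e["cso"] and e["depth"] <= SMALL_STORM_MAX_DEPTH_IN
    )
    # Metric 2: mean CSO duration during larger storms.
    large = [e for e in rows if e["cso"] and e["depth"] > SMALL_STORM_MAX_DEPTH_IN]
    mean_large_duration = (
        sum(e["duration"] for e in large) / len(large) if large else 0.0
    )
    # Metric 3: total CSO volume across all events in the period.
    total_volume = sum(e["volume"] for e in rows)
    return small_storm_csos, mean_large_duration, total_volume

print("pre :", metrics("pre"))
print("post:", metrics("post"))
```

With these toy records, the post-adoption period shows fewer small-storm CSOs, shorter large-storm durations, and lower total volume; a real analysis would apply the same comparisons to the stored recommendation and CSO records.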

A plan detailing specific analysis methodologies will be forthcoming at a later date.