Multi-user app to deploy and manage HEC-RAS simulations across local Windows machines
To effectively manage a collection of 18 Windows desktop machines, optimized and dedicated to HEC-RAS hydraulic simulations, I developed an application that allows any user on the team to remotely deploy their HEC-RAS model simulations from a simple user interface. An intelligent distribution and queuing system runs the scenarios in parallel across the pool of machines. The GUI includes a dashboard for viewing current and past compute-core utilization across the system and the model runs in progress.
Colleagues made crucial contributions to the hardware setup and optimization and identified the Windows protocols critical to making the app work; I developed the concept and the code myself. The stack is a Python FastAPI backend, a JavaScript/React frontend, and a MongoDB database, served securely over the internal network.
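As a minimal sketch of the submission path under that stack, the endpoint below enqueues a run into MongoDB and worker machines claim jobs atomically; the collection, field, and endpoint names here are hypothetical stand-ins, and the production schema and distribution logic are more involved:

```python
# Hedged sketch: hypothetical collection/field names, not the production schema.
from datetime import datetime, timezone

from fastapi import FastAPI
from pydantic import BaseModel
from pymongo import MongoClient

app = FastAPI()
jobs = MongoClient("mongodb://localhost:27017")["hecras"]["jobs"]  # assumed names

class RunRequest(BaseModel):
    project_path: str  # UNC path to the HEC-RAS project on the file server
    plan_name: str     # which plan/scenario to compute

@app.post("/runs")
def submit_run(req: RunRequest):
    # Queue the run; agents on each machine poll for "queued" jobs.
    doc = {
        "project_path": req.project_path,
        "plan_name": req.plan_name,
        "status": "queued",
        "submitted_at": datetime.now(timezone.utc),
    }
    result = jobs.insert_one(doc)
    return {"job_id": str(result.inserted_id)}

def claim_next_job(machine: str):
    # Atomic find-and-update prevents two machines from taking the same run.
    return jobs.find_one_and_update(
        {"status": "queued"},
        {"$set": {"status": "running", "machine": machine}},
        sort=[("submitted_at", 1)],
    )
```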
FloodMap
Framework for visualizing hydraulic model results on the web
Developed a web-map application with two colleagues that takes spatiotemporal hydraulic model results (HDF5) out of the HEC-RAS desktop software and onto an interactive online map. Features include full-quality inundation depths and other result layers, real-time USGS rain gauge data plots and precipitation radar visualization, velocity-vector particle-tracing animation, and swipe comparison of multiple scenarios.
I had long wanted to extract model output from HEC-RAS and visualize it in novel, intuitive ways to make it more accessible to stakeholders and the public. After being exposed to web-mapping technology and mocking up a prototype, I got approval to begin production of the app. I developed the app's data-processing pipeline in Python to process terabytes of hydraulic data in a time- and memory-efficient manner, and collaborated on the GUI, which uses only free and open-source JavaScript libraries.
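The key to keeping memory bounded is streaming the HDF5 results in chunks rather than loading whole datasets at once. Below is an illustrative sketch of that pattern with h5py; the dataset path is a made-up placeholder, not the actual HEC-RAS file layout:

```python
# Chunked-read sketch: only one slice of the dataset is in memory at a time.
import h5py
import numpy as np

DEPTH_DATASET = "Results/Depth"  # hypothetical path within the results file

def max_depth_per_cell(hdf_path: str, chunk_rows: int = 1000) -> np.ndarray:
    """Reduce a (timesteps x cells) dataset to per-cell maxima, chunk by chunk."""
    with h5py.File(hdf_path, "r") as f:
        dset = f[DEPTH_DATASET]
        running_max = np.full(dset.shape[1], -np.inf, dtype=np.float32)
        for start in range(0, dset.shape[0], chunk_rows):
            block = dset[start : start + chunk_rows]  # reads only this slice
            running_max = np.maximum(running_max, block.max(axis=0))
    return running_max
```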
Calibration Tool
Webmap application for hydraulic model calibration
Built a tool that assists hydraulic modelers in calibrating their HEC-RAS models through automated computation of calibration metrics and interactive visualization. Team members simply drag and drop the results of a finished simulation, a shapefile of observed gauge stations, and a CSV of the observed gauge data for the desired time window onto the shared file server. A file-watcher service kicks off a local script that samples timeseries results at each gauge location using Inverse Distance Weighted interpolation and computes performance metrics against the calibration targets. It first generates an HTML file summarizing model performance for that trial, then creates a dedicated website where the user can view all computed runs for their model, the top-level metrics for each run, and a webmap where clicking each gauge station opens an interactive plot of observed versus simulated results, along with the computed metrics for that gauge in that scenario.
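For illustration, here is a sketch of the two numerical pieces: IDW sampling of cell results at a gauge, and one common calibration metric (Nash-Sutcliffe efficiency). The function names are hypothetical and the tool's actual metric set may differ:

```python
# Illustrative sketch, not the tool's actual implementation.
import numpy as np

def idw_sample(gauge_xy, cell_xy, cell_values, power=2.0, k=4):
    """Interpolate a timeseries at a gauge from its k nearest cells.

    cell_xy: (n_cells, 2) coordinates; cell_values: (timesteps, n_cells)."""
    dist = np.hypot(*(cell_xy - gauge_xy).T)
    nearest = np.argsort(dist)[:k]
    if dist[nearest[0]] == 0:  # gauge sits exactly on a cell center
        return cell_values[:, nearest[0]]
    weights = 1.0 / dist[nearest] ** power
    return cell_values[:, nearest] @ (weights / weights.sum())

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is perfect."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )
```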
Adoption of the tool by the entire team has been a great success, thanks to its ease of use and the straightforwardness of the visualization. The tool has helped me personally through the elusive, laborious feat of calibrating two large watershed models. I consider it the product of finding the right balance between computer automation and human intuition: I had previously tried using numerical optimization to fully automate the calibration process, but I have since found that making subtle, intentional model changes, guided by engineering experience and intuitive visualizations of prior runs, is far more useful in the art form of model calibration.
Stateless HEC-RAS Cloud Architecture
Parallelizing HEC-RAS simulations in AWS
Developed architecture to deploy HEC-RAS simulation runs statelessly to Linux VM instances in the cloud (AWS), enabling significant speedups for multi-run studies by parallelizing across many instances. The following steps are all automated. A local HEC-RAS model is copied multiple times, with parameters changed as specified for each copy. A special Windows preprocessor is run for each copy, and the minimal set of files required to run on Linux is uploaded to an AWS S3 storage bucket. A Linux VM instance is then created for each run and given its unique instructions. From here, the local Python process finishes, and each VM instance completes its lifecycle statelessly: it copies the HEC-RAS Linux binaries and its unique model files from the S3 bucket, runs its specified scenario, copies the results and run information back to the S3 directory, and finally terminates itself.
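A condensed sketch of the dispatch step with boto3 is below. The bucket layout, AMI, instance profile, and the HEC-RAS binary invocation inside the user-data script are all placeholders, and the real pipeline performs the Windows preprocessing and file minimization before this point:

```python
# Hedged sketch of per-run instance dispatch; names and paths are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

USER_DATA = """#!/bin/bash
aws s3 cp s3://my-ras-bucket/binaries/ /opt/ras/ --recursive
aws s3 cp s3://my-ras-bucket/runs/{run_id}/input/ /work/ --recursive
cd /work && /opt/ras/RasUnsteady model.c01 b01   # hypothetical invocation
aws s3 cp /work/ s3://my-ras-bucket/runs/{run_id}/output/ --recursive
shutdown -h now  # instance was launched with terminate-on-shutdown
"""

def launch_run(run_id: str):
    # Each instance receives only its own run ID; nothing persists locally
    # after this call returns, which is what makes the design stateless.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder Linux AMI
        InstanceType="c5.4xlarge",                  # placeholder size
        MinCount=1,
        MaxCount=1,
        InstanceInitiatedShutdownBehavior="terminate",
        UserData=USER_DATA.format(run_id=run_id),
        IamInstanceProfile={"Name": "ras-runner"},  # role with S3 access
    )
```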
IOOS Compliance Checker
Python package for checking compliance of NetCDF files with CF conventions
Worked on software maintenance for the US Integrated Ocean Observing System (IOOS) Compliance Checker, a utility that validates NetCDF data against the CF specification. NetCDF is an industry-standard format in climate and earth sciences for n-dimensional, spatiotemporal datasets. The CF spec defines legal data structures and metadata conventions that allow collaboration and data aggregation between organizations.
My contributions to the project include upgrading the test suite from the legacy Unittest framework to Pytest, replacing boilerplate code with parameterized functions, fixtures, and marks; creating documentation for the software and the CF spec to on-board new developers; and adding support for datasets in the modern, cloud-optimized Zarr format. The project is open source and available on GitHub, linked in the image above.
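As a generic illustration of that migration pattern (not the checker's actual test code), one parameterized Pytest function with a fixture can replace a family of near-identical Unittest methods:

```python
# Illustrative Unittest-to-Pytest pattern; names are stand-ins, not the real API.
import pytest

@pytest.fixture
def checker():
    # The real suite would construct a CF checker instance here; a stand-in
    # validator keeps this sketch self-contained.
    return lambda units: units in {"m", "m s-1", "degrees_north"}

@pytest.mark.parametrize(
    "units,expected",
    [
        ("m", True),
        ("m s-1", True),
        ("furlongs", False),  # not a CF-style unit in this toy validator
    ],
)
def test_units_validation(checker, units, expected):
    # One parameterized test replaces one hand-written method per case.
    assert checker(units) is expected
```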