Merge pull request #4 from DHI/modelskill
Rename fmskill -> modelskill
ecomodeller committed Dec 18, 2023
2 parents 3dbd86d + e2ba9fd commit 9590ec5
Showing 17 changed files with 5,296 additions and 1,056 deletions.
2 changes: 2 additions & 0 deletions Makefile
@@ -0,0 +1,2 @@
all:
jupyter-book build mini_book
4 changes: 2 additions & 2 deletions README.md
@@ -1,4 +1,4 @@
# Python for marine modelers using [MIKE IO](https://github.com/DHI/mikeio) and [FMskill](https://github.com/DHI/fmskill)
# Python for marine modelers using [MIKE IO](https://github.com/DHI/mikeio) and [ModelSkill](https://github.com/DHI/modelskill)

[https://dhi.github.io/book-learn-mikeio-fmskill/](https://dhi.github.io/book-learn-mikeio-fmskill/)

@@ -10,7 +10,7 @@ This course has been designed for MIKE 21/3 modelers with some Python experience

## CONTENT

The course comprises typical data processing tasks in connection with MIKE 21/3 modelling: preparing input data, converting forcing data from other formats, creating roughness maps, concatenating files, extracting data from output files (e.g. vertical profiles), visualization, and assessing the skill of MIKE 21/3 by comparing to station and satellite track observation data. We will use Jupyter notebooks with Python packages such as NumPy, pandas and xarray, as well as DHI's own MIKE IO and FMskill packages.
The course comprises typical data processing tasks in connection with MIKE 21/3 modelling: preparing input data, converting forcing data from other formats, creating roughness maps, concatenating files, extracting data from output files (e.g. vertical profiles), visualization, and assessing the skill of MIKE 21/3 by comparing to station and satellite track observation data. We will use Jupyter notebooks with Python packages such as NumPy, pandas and xarray, as well as DHI's own MIKE IO and ModelSkill packages.

## BENEFITS

2 changes: 1 addition & 1 deletion mini_book/_config.yml
@@ -1,7 +1,7 @@
# Book settings
# Learn more at https://jupyterbook.org/customize/config.html

title: Python for marine modelers using MIKE IO and FMSkill
title: Python for marine modelers using MIKE IO and ModelSkill
author: Jesper Mariegaard and Henrik Andersson
logo: images/logo.png

6 changes: 3 additions & 3 deletions mini_book/_toc.yml
@@ -22,12 +22,12 @@ parts:
- file: output_visualisation
- file: output_statistics
- file: export_to_file
- caption: FMskill
- caption: ModelSkill
chapters:
- file: point_observations
sections:
- file: exercises/exercise_point_observations
- file: fmskill_model_results
- file: modelskill_model_results
- file: model_skill
sections:
- file: exercises/exercise_basic_model_skill
@@ -38,5 +38,5 @@
- file: multi_model_comparison
- caption: Assignments
chapters:
- file: exercises/fmskill_assignment
- file: exercises/modelskill_assignment
- file: exercises/final_assignment
104 changes: 52 additions & 52 deletions mini_book/exercises/exercise_basic_model_skill.ipynb
@@ -2,156 +2,156 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercise: Basic model skill"
],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"source": [
"import fmskill"
],
"metadata": {},
"outputs": [],
"metadata": {}
"source": [
"import modelskill"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You want to do a simple comparison between model and observation using the fmskill.compare method, but the following code snippet doesn't work.\n",
"\n",
"Change the code below, so that it works as intended. Hint: look at the documentation\n",
"```\n",
"help(fmskill.compare)\n",
"```"
],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fn_mod = '../data/SW/ts_storm_4.dfs0'\n",
"fn_obs = '../data/SW/eur_Hm0.dfs0'\n",
"\n",
"c = fmskill.compare(fn_obs, fn_mod)"
],
"outputs": [],
"metadata": {}
"c = modelskill.compare(fn_obs, fn_mod)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When you have fixed the above snippet, you can continue to do the skill assessment"
],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot a timeseries of the comparison\n",
"# * remove the default title\n",
"# * set the limits of the y axis to cover the 0-6m interval"
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Your colleague, who is very skilled at Excel, wants to make a plot like this one:\n",
"\n",
"![](../images/excel_chart.png)"
],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Use the .df property on the comparer object to save the obs and model timeseries as an Excel file (\"skill.xlsx\")\n",
"# you might get an error \"No module named 'openpyxl'\", the solution is to run `pip install openpyxl`"
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# calculate the default skill metrics using the skill method\n"
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"# calculate the skill using the mean absolute percentage error and max error, use the metrics argument\n",
"# c.skill(metrics=[__,__])"
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# import the hit_ratio metric from fmskill.metrics\n",
"# import the hit_ratio metric from modelskill.metrics\n",
"# and calculate the ratio when the deviation between model and observation is less than 0.5 m\n",
"# hint: use the Observation and Model columns of the dataframe from the .df property you used above\n",
"\n",
"# is the hit ratio ~0.95 ? Does it match with your expectation based on the timeseries plot?\n",
"# what about a deviation of less than 0.1m? Pretty accurate wave model..."
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# hit_ratio(c.df.Observation, __, a=__)"
],
"outputs": [],
"metadata": {}
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# compare the distribution of modelled and observed values, using the .hist method\n",
"# change the number of bins to 10"
],
"outputs": [],
"metadata": {}
]
}
],
"metadata": {
"orig_nbformat": 4,
"interpreter": {
"hash": "f4041ee05ab07c15354d6207e763f17a216c3f5ccf08906343c2b4fd3fa7a6fb"
},
"kernelspec": {
"display_name": "Python 3.9.6 64-bit",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.6",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3.9.6 64-bit"
"pygments_lexer": "ipython3",
"version": "3.9.6"
},
"interpreter": {
"hash": "f4041ee05ab07c15354d6207e763f17a216c3f5ccf08906343c2b4fd3fa7a6fb"
}
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
}
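The hit-ratio exercise in the notebook above can be prototyped without ModelSkill at all. Below is a minimal stand-in sketch, not the library's own implementation: the `hit_ratio` function and the sample `Observation`/`Model` columns are hypothetical, mirroring the `.df` layout the exercise refers to, and the `a` parameter plays the role of the deviation threshold.

```python
import numpy as np
import pandas as pd

# Hypothetical sample data standing in for the comparer's .df
# (Observation and Model columns from the exercise); values are invented.
df = pd.DataFrame({
    "Observation": [1.2, 2.5, 3.1, 4.0, 2.2],
    "Model":       [1.0, 2.7, 3.0, 4.6, 2.3],
})

def hit_ratio(obs, model, a=0.5):
    """Fraction of points where the model deviates from the observation by less than a."""
    return np.mean(np.abs(np.asarray(model) - np.asarray(obs)) < a)

# Four of the five points deviate by less than 0.5 m
print(hit_ratio(df.Observation, df.Model, a=0.5))  # → 0.8
```

Tightening `a` (e.g. to 0.1 m) shrinks the ratio, which is the effect the exercise asks you to inspect against the timeseries plot.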
