The cookbook is a Python 3 package that provides infrastructure for indexing COSIMA model output, along with convenient methods for searching for and loading that data into xarray data structures.
Some users may find it sufficient to browse the examples and tutorials in the COSIMA recipes repository. The Jupyter notebooks that can be downloaded from COSIMA recipes require this package (called cosima_cookbook) to be installed.
Choosing your platform¶
COSIMA ocean and ice models are typically run at NCI, an HPC computing centre in Australia. The output data is very large, so it is assumed that this data resides on an NCI storage system.
The cookbook is supported on two NCI systems: the VDI and gadi.
For both the VDI and gadi, scripts are used to start a Jupyter notebook or JupyterLab session on the chosen system and automatically create an SSH tunnel, so that the Jupyter session can be opened in your local browser at a URL such as http://localhost:8888, as if it were running on your own local machine.
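The tunnelling step the scripts automate can be sketched with a plain SSH command. This is a hypothetical illustration, not taken from the scripts themselves; the username, port, and remote host here are placeholders and will differ in practice.

```shell
# Hypothetical example: forward local port 8888 to a Jupyter session
# already running on gadi, then open http://localhost:8888 locally.
# Replace abc123 with your NCI username; the port may differ.
ssh -N -L 8888:localhost:8888 abc123@gadi.nci.org.au
```

The provided scripts handle this (and starting the remote Jupyter session) for you, so the command above is only for understanding what happens under the hood.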
Scripts for this purpose are provided by the CLEX CMS team in this repository.
Clone the repository to your local computer. There are instructions in the repository on the requirements for each script and how to use them.
Alternatively, if you are using the VDI Strudel environment and accessing the VDI through a virtual desktop, you can load the same conda Python environment used in the scripts above and start a Jupyter notebook session like so:
module use /g/data3/hh5/public/modules
module load conda/analysis3
jupyter notebook
Most of the infrastructure the COSIMA Cookbook provides revolves around indexing data output from COSIMA models and providing a Python-based API to access the data in a convenient and straightforward way.
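As a rough illustration of the indexing idea, a database maps experiments and variables to the files that contain them, so a query can locate data without walking the filesystem. The sketch below is a self-contained toy using Python's standard-library sqlite3 module; the table name, columns, and paths are invented for illustration and are not the cookbook's actual schema or API.

```python
import sqlite3

# Build a toy index: each row records which file holds a given
# variable for a given experiment. (Illustrative only; the real
# cookbook builds a much richer index of COSIMA model output.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ncfiles (experiment TEXT, variable TEXT, path TEXT)")
conn.executemany(
    "INSERT INTO ncfiles VALUES (?, ?, ?)",
    [
        ("expt01", "temp", "/g/data/.../expt01/output000/ocean.nc"),
        ("expt01", "salt", "/g/data/.../expt01/output000/ocean.nc"),
        ("expt02", "temp", "/g/data/.../expt02/output000/ocean.nc"),
    ],
)

# Query the index for all files holding 'temp' in experiment 'expt01';
# a tool like the cookbook would then open such files with xarray.
paths = [
    row[0]
    for row in conn.execute(
        "SELECT path FROM ncfiles WHERE experiment=? AND variable=?",
        ("expt01", "temp"),
    )
]
print(paths)
```

The point of the design is that queries hit a small database rather than scanning very large model output directories each time.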
There are graphical user interface (GUI) tools to help with data discovery and exploration. A tutorial in the COSIMA recipes repository demonstrates the available tools.