Me, blogging!

thoughts, science, code, and jobs

Darwin Lecture: Animal Migration

Today I watched a lecture that I can highly recommend if you're interested in social behaviour, swarm intelligence, etc.:

Tags: talk 2018/06/22
Net Primary Productivity and the Miami Model

I haven't been posting for a while, as I now realise. So here's a new post about a fairly simple (and old) model for calculating net primary productivity (NPP), the so-called Miami Model (Lieth, 1973):

\[ NPP(T) = \frac{3000}{1+\exp(1.315-0.119\,T)} \\ NPP(P) = 3000 \left(1-\exp(-0.000664\,P)\right) \\ NPP = \min(NPP(T),NPP(P)) \]

After reading through the more recent literature, it turns out that the Miami Model is still regarded as a useful proxy for annual NPP. Global NPP observations are available, for example the Terra/MODIS Net Primary Production Yearly L4 Global 1km product. But these are currently not distributed as a single global data set; instead, you have to download the individual tiles of the global 1km grid. I have no idea why, but apparently nobody is interested in compiling a global map from the available tile data (maybe a topic for another blog post?). Anyway, here is what the annual NPP looks like for a present-day climatology, i.e., the Ten Minute Climatology.

Annual Net Primary Productivity from 10' climatology temperature and precipitation

Here is a simple Python implementation:

import numpy as np

def miami_model(temp, prec, a=3000., b=1.315, c=0.119, d=0.000664):
    """Annual NPP from temperature (degC) and precipitation (mm/yr)."""
    npp_temp = a / (1. + np.exp(b - c*temp))  # temperature-limited NPP
    npp_prec = a * (1. - np.exp(-d*prec))     # precipitation-limited NPP
    return np.fmin(npp_temp, npp_prec)        # the smaller limit wins
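As a quick sanity check of the implementation (the function is repeated so this snippet runs standalone; the input values are arbitrary, not taken from any data set):

```python
import numpy as np

def miami_model(temp, prec, a=3000., b=1.315, c=0.119, d=0.000664):
    # same Miami Model function as above, repeated for a standalone example
    return np.fmin(a/(1.+np.exp(b-c*temp)), a*(1.-np.exp(-d*prec)))

# a warm, wet cell: temperature is the limiting factor
print(miami_model(20., 3000.))  # ~2231
# a cold, dry cell: precipitation is the limiting factor
print(miami_model(5., 200.))    # ~373
```

The `min` rule means each grid cell ends up limited by whichever of the two factors is scarcer.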
Tags: models 2017/11/20
Impressions with cookiecutter for reproducible science

I managed to be strict about reproducible science. And the paper I was working on with my co-authors (SEMIC: an efficient surface energy and mass balance model applied to the Greenland ice sheet) is the first example that reproducible science is fun, and I will definitely keep going in that direction. The paper took much longer than I had anticipated (at home it was colloquially dubbed Rohrkrepierer, German for a shell that bursts in the barrel), but our Open-Science approach is not to blame.

So we can check off one item of the previously mentioned bullet points: cookiecutter for reproducible science serves a good purpose. Since then, literally all of my projects have started with this cookiecutter, even the smallest ones. My project folder is now fully organized; each project has its place, and I know where to look for each file. Some of my projects "in production" are easily converted into version-controlled projects by simply typing

git init .

I very much recommend it. Happy cookie cutting!

Tags: reproducibility, github 2017/05/16
Next topics

My first 6 months at my new institute are over and I've been working on a couple of diverse topics lately. So, I'm taking the opportunity to share some things I've learnt during that time, and, in order to remind myself to post stuff more frequently I'll start out with a list of upcoming topics:

  • ☐ GIS tools: manipulation of different raster and vector files; transformation between raster and vector formats; conversion between different cartographic projections; powerful command-line tools like ogr2ogr and the GDAL suite
  • ☐ Insolation changes over the course of the last thousands to millions of years
  • ☐ f2py and the pretty awesome f90wrap tool for derived types
  • ☑ First impressions from using cookiecutter for reproducible science (Update: See next post)

Just in case you need to know, the HTML entity for the unchecked checkbox ☐ is &#9744;; the entity for the checked checkbox ☑ is &#9745;.

Tags: 2016/12/07
Cookiecutter for a more transparent and reproducible science

Last week I watched an inspiring tutorial ("Data Science is Software") from SciPy 2016. And this reminded me of the debate about how reproducible science really is. It seems that the majority of scientists agree that there is a reproducibility crisis.

That is indeed frustrating, also because research itself should be transparent. From my own experience, I understand why that is the case: Imagine you did some fine research and you've saved your results for a later publication. The results themselves were hard to obtain, either because of extensive data analysis, lengthy computer simulations, or scraping data from a larger database. Later that year you realise that some of the results are corrupt, or that you made a mistake in the analysis, or used a wrong configuration. So you try to reproduce your own results. This is a tedious task: you go back to the beginning and try to repeat every single step that led to your final result. Fine! But wait. Why not simply automate this process from the beginning? Yeah, sure, but how?


So I adopted the philosophy of the guys from the video above, which states that in order to deliver reproducible (data) science you need "A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.", and developed a similar cookiecutter project template aimed at scientists.

You can find the source on Github and simply install it for your next science project:

cookiecutter gh:mkrapp/cookiecutter-reproducible-science

This tool provides you with the basic structure for your research project. You can adjust it to your needs and then fill it with your reproducible workflow. You can add a customized Makefile, your preferred command line tools, scripts, or model source code. You can also add raw data and processed data. You can document, and even write, your scientific paper within this structure.

Let's do more reproducible and transparent science!!!

PS: I'm also currently adapting one of my science projects and I'm going to provide the final reproducible version later on (I need to finish it first).

Tags: design, productivity, reproducibility, github 2016/07/25
Live from the EGU General Assembly 2016 (press conference)

I haven't been to the EGU since 2013, so I thought it worth going there this year. I submitted an abstract about Historical Responsibility for Climate Change – from countries emissions to contribution to temperature increase and was accepted for a talk. To my surprise and astonishment I was also invited to a press conference, which I happily accepted. All good! All good? Far from it. Unfortunate circumstances (funding, as always) left me unemployed after my fixed-term contract officially ended in March and couldn't be extended this year. A stupid, sad, and unfair situation I wish nobody has to experience in any job (or job ending). The good news is that, after some discussions, my former employer and I agreed it would be best if I still attended the EGU General Assembly: because I earned it, because our hard work on this topic deserves it, and because I had the chance to be on a panel for a press conference (how great is that???). The guy to the left in the video below, that's me talking about climate science, climate policy, and responsibility.

I'm very grateful to the EGU for inviting me to a press conference. I very much enjoyed the contributions from the other great panelists presenting their work related to climate change and the Paris Agreement (~45min in total). And, of course, I'm also grateful to Climate Analytics, my former employer 1, for letting me be part of the EGU General Assembly 2016. I had a lot of fun, met quite a few old friends and colleagues, and enjoyed springtime in Vienna.

1. Thank you guys for many wonderful experiences and the great time I had with you.

Tags: conference, media 2016/04/29
Building and Running CESM

Recently I applied for a job where one requirement is experience with the Community Earth System Model (CESM). So this post is about how to build and run the publicly available CESM under Mac OS X. It is fairly straightforward, but you need to pay some attention, because Earth System Models are naturally complex and complicated software frameworks. Here are the steps to run CESM on your Mac.

  1. Register at the CESM website to obtain credentials for the source code download.
  2. Checkout the latest release (or whatever version you prefer) from the CESM SVN server
  3. Prepare and build a custom setup following the instructions by Walter Hannah
  4. Run the CESM model

Checkout CESM source code from SVN repository

svn co --username guestuser cesm1_2_2

Configure your machine, compiler, and library settings

You have to tell the CESM setup framework that you would like to use the GNU compiler, the NetCDF and the Parallel-NetCDF libraries for your "undefined" Mac OS X machine. Therefore, edit ccsm_utils/Machines/config_compilers.xml:

<compiler MACH="userdefined">
  <ADD_FFLAGS> -fno-range-check -fcray-pointer -arch x86_64 </ADD_FFLAGS>
  <ADD_SLIBS>$(shell $(NETCDF_PATH)/bin/nc-config --flibs)</ADD_SLIBS>
</compiler>

Create a CESM setup

Now you are ready to create a new CESM setup. I chose to run an Aquaplanet simulation with CAM5, hoping that it would not take too long to complete. In cesm1_2_2/scripts type

./create_newcase -case <your case> -res T31_g37 -compset 2000_CAM5_SLND_SICE_AQUAP_SROF_SGLC_SWAV -mach userdefined 

Other configurations, i.e., compsets, can be found on the CESM website. Next, adjust the XML configuration files in cesm1_2_2/scripts/<your case>, where <your case> is the name of the CESM setup you created before

./xmlchange -file env_build.xml -id GMAKE_J -val 8
./xmlchange -file env_build.xml -id GMAKE -val make
./xmlchange -file env_build.xml -id OS -val darwin 
./xmlchange -file env_build.xml -id MPILIB -val mpich 
./xmlchange -file env_build.xml -id COMPILER -val gnu
./xmlchange -file env_build.xml -id CESMSCRATCHROOT -val ~/Projects/cesm1_2_2
./xmlchange -file env_build.xml -id EXEROOT -val ~/Projects/cesm/my_model/bld

and for the build environment

mkdir -p ~/Projects/cesm/my_model
mkdir -p ~/Projects/cesm/input
./xmlchange -file env_run.xml -id RUNDIR -val ~/Projects/cesm/my_model/run
./xmlchange -file env_run.xml -id DIN_LOC_ROOT -val ~/Projects/cesm/input

and to change the number of CPUs used from the default (64) to 2:

./xmlchange -file env_mach_pes.xml -id MAX_TASKS_PER_NODE -val 1
./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_GLC -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_ROF -val 2    
./xmlchange -file env_mach_pes.xml -id NTASKS_WAV -val 2    
./xmlchange -file env_mach_pes.xml -id TOTALPES -val 2

These changes are processed via ./cesm_setup, and then you can start the build process in <your case>/

./<your case>.build

Run your CESM model

Finally, you need to uncomment one of those two lines in <your case>.run

#mpiexec -n 2 $EXEROOT/cesm.exe >&! cesm.log.$LID
#mpirun -np 2 $EXEROOT/cesm.exe >&! cesm.log.$LID

Now, you can run your CESM setup on your Mac OS X

./<your case>.run

and, hopefully, after a while your console prints out something like this:

 - To prestage restarts, untar a restart.tar file into ~/Projects/cesm/my_model/run
 infile is ~/Projects/cesm1_2_2/scripts/<your case>/Buildconf/cplconf/cesm_namelist 
CAM writing dry deposition namelist to drv_flds_in 
CAM writing namelist to atm_in 
 - Case input data directory, DIN_LOC_ROOT, is ~/Projects/cesm/input
 - Checking the existence of input datasets in DIN_LOC_ROOT
Thu Mar 24 12:02:31 CET 2016 -- CSM EXECUTION BEGINS HERE
Thu Mar 24 12:24:58 CET 2016 -- CSM EXECUTION HAS FINISHED
(seq_mct_drv): ===============       SUCCESSFUL TERMINATION OF CPL7-CCSM ===============

Here's the zonal mean of the air temperature. I didn't pay too much attention to the axis descriptions but I hope you acknowledge this as a result ;)

Zonal mean of air temperature

Tips and Tricks and Errors

  • To make the parallel-netcdf library available to everybody using homebrew for Mac OS X I already created a PR on Github.
  • During the SVN checkout the MCT directory in models/utils was not updated. You can do so manually by entering git clone mct in models/utils.
  • Replace isnanf with isnan in shr_isnan.c as described here; other potential errors during that process are listed, explained and resolved here.
Tags: models, code, CESM 2016/03/24
Optimization problem solving with Pyomo

Next on this blog: An example about the optimization modeling language Pyomo.

Tags: python, pyomo, optimization, models 2016/03/03
Exceedance probability curves

While working on a project proposal to estimate drought risks of some countries in the coming years, I learned something about risk analysis: It's rather simple! But you need to be cautious if you want to use this information for guessing what might happen in the future. In the context of anthropogenic climate change, that usually means you can't rely on exceedance probabilities from observed (and hence past) data, because the underlying climate is changing. And even if that's not the case, your record may over- or underestimate the occurrence of specific events, just by pure chance.

However, let me show you how to quickly derive exceedance probabilities from some river discharge data.

From the monthly discharge data, we estimate the cumulative distribution function (CDF) by assigning each sorted monthly value a number, starting from 0 for the lowest up to 1 for the highest monthly discharge value. The exceedance probability is simply 1 - CDF. Plot the exceedance probability against the sorted monthly data, et voilà, there is your exceedance probability curve.
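Here is a minimal numerical sketch of that recipe, using a handful of made-up discharge values rather than the Elbe data:

```python
import numpy as np

# made-up monthly discharge values (m^3/s), not from the Elbe record
q = np.array([700., 450., 1200., 300., 900.])

q_sorted = np.sort(q)
cdf = np.linspace(0., 1., len(q_sorted))  # empirical CDF: rank from 0 to 1
ep = 1. - cdf                             # exceedance probability

for qi, pi in zip(q_sorted, ep):
    print(qi, pi)  # the lowest value is exceeded with probability 1.0
```

Sort, rank, subtract: that's the whole exceedance-probability curve.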

Exceedance Probability Curve

Here's the link to the CSV file elbe.csv and the Python code to process the data.

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
from matplotlib.ticker import FixedLocator

plt.style.use('bmh')

fnm = 'elbe.csv'
df = pd.read_csv(fnm, index_col=0, parse_dates=True)
raw = df['Elbe']
ser = raw.sort_values()

# fit a lognormal distribution to the monthly discharge values
X = np.linspace(0., max(ser.values)*1.1, 100)
s, loc, scale = stats.lognorm.fit(ser.values)

# empirical CDF: each sorted value gets a rank between 0 and 1
cum_dist = np.linspace(0., 1., len(ser))
ser_cdf = pd.Series(cum_dist, index=ser)
ep = 1. - ser_cdf  # exceedance probability

fig = plt.figure(figsize=(11, 8))

ax1 = plt.subplot(221)
ax1.plot(ser.values, cum_dist, label='data')
ax1.plot(X, stats.lognorm.cdf(X, s, loc, scale), label='lognormal')
ax1.set_ylabel('Cumulative Probability')
ax1.legend()

ax2 = plt.subplot(223)
ax2.hist(ser.values, bins=21, density=True, label='data')
ax2.plot(X, stats.lognorm.pdf(X, s, loc, scale), label='lognormal')
ax2.set_ylabel('Probability Density')

# linear least-squares fit of discharge against exceedance probability
A = np.vstack([ep.values, np.ones(len(ep.values))]).T
[m, c], resid = np.linalg.lstsq(A, ser.values, rcond=None)[:2]
print(m, c)
r2 = 1 - resid / (len(ser.values) * np.var(ser.values))
print(r2)

ax3 = plt.subplot(222)
ax3.plot(100.*ep.values, ser.values, label='data')
ax3.plot(100.*(1.-stats.lognorm.cdf(X, s, loc, scale)), X, label='lognormal')
ax3.xaxis.set_minor_locator(FixedLocator([1, 2, 5, 10, 20, 50, 100]))
ax3.set_xlabel('Exceedance Probability (%)')

ax4 = plt.subplot(224)

plt.suptitle('Elbe discharge', y=1.01, fontsize=14)


Tags: statistics, python, code, models 2016/02/20
Keeping (time) track of different projects

I work on different projects, and it is not easy to keep track of the time spent on each of them. There are, of course, tools that handle time tracking for you, but they are either aimed at freelancers, too complicated, or expensive. Here's an alternative: go-watch.

I've tried Go because compiled executables can be deployed everywhere. Try that, Python! In principle you would just need a compiled version for your operating system, but here's the catch: you have to build it on your own.

All you need is a Go installation and a clone of the go-watch project:

git clone git://

Screenshot of go-watch

Tags: productivity, go, code 2015/10/06
Finish it

Finishing a project is tough. It takes a lot of time, always more resources than previously anticipated. And of course it is stressful, especially shortly before the deadline. You really start to grasp the meaning of that word: "deadline". There is no afterwards, there is no: Let's do it again! It is the one chance you have been given, and you need to take it.

And again, I learned quite a lot from the last project (I cannot tell you more about its content because it's classified, though not in a military or conspiracy sense ;) ). The most important thing, again: communication. In many ways it helps to discuss things over and over again, because there is no time to clarify things afterwards. It doesn't really help to have the aha! moment right after you submit your final report.

Communication is essential for getting things done. It helps you start over, re-think your approach, and, of course, avoid misconceptions. Because the one simple thing is not necessarily the most obvious thing to your counterpart.

The second most important thing: specification. Once you know what exactly to do, it may be easier to follow specific guidelines or requirements. In the best case, those specifications should not change. Specifications help to set mile stones, which in turn help to meet required deadlines.

Last but not least: revision. Revisions take time, so don't start revising your work too late.

The above suggestions do not guarantee a successful project, but they help me structure a project both in terms of time and in terms of resources.

Tags: job 2015/09/18
My new job

That one went fast. And I hadn't seen that job coming, either. But I've now found a new job at an organization supporting Least Developed Countries and Small Island Developing States in their efforts during climate negotiations. Here's the link to their website. And here's the good thing about it: I can go there by bike! That was (nearly) impossible at my former institute in Potsdam (though I did take the bike to get to and from the train stations).

For this reason, I had to change the title of this blog, and I leave it open to new suggestions about what to write next. I'm a little bit sad having just 8 blog entries. Let me think, maybe I'll come up with something great next time...

Tags: job 2015/02/20
To set up an email server

Today, I set up an email server on my raspberrypi. Thanks to the tutorial by Sam Hobbs! And it works. If you would like to give it a try, just drop me a mail.

Here's what you need:

  • Postfix as your mail transfer agent
  • Dovecot as your IMAP server
  • A (trusted) certificate for encryption (SSL and the like)

In principle, you could also add a spam filter and things like that. I skipped that, for one reason: because my domain has no static IP address (Dynamic DNS), sent mails are often regarded as spam anyway. Hence, this is just an exercise in server administration.

As an add-on I also installed Roundcube as a web-mail application.

Tags: server, administration, email 2015/01/20
More Applications

My number of applications grew to 5 last week. And on Monday, I got my first job interview, only a few days after I sent in my application! I really enjoyed the interview, also because I felt quite comfortable with the people talking to me. The discussion was vivid and the described tasks were very interesting (let's keep our fingers crossed!).

Yesterday, I also had a meeting with the employment agency. My supervisor there was open to my requests and wishes, and I felt welcome, which is not necessarily a given, especially here in Germany. As he explained, all employees at the agency are very, very busy and have to take care of as many as 300-400 clients each. That is a huge workload for them.

Now, I'm looking for new job ads, again.

Tags: job 2015/01/15
New old project

Spiral wave

A few days ago, I uploaded a very old (almost ancient) code project to Github. It is a relic of my studies at university (2007!) and I used this code for my diploma thesis. After tweaking the code quite a bit, I managed to get it running and spitting out this nice spiral wave (which makes this blog look a lot more colorful than I originally intended).

Of all the Java files I produced back then, I included only the ones that serve a good purpose (like the one above). I also had to add a build file for Ant because, originally, this project was developed in Eclipse. I am quite happy to see the old stuff still working.

Please, go and try it yourself:

git clone

and then

ant [clean,compile,jar,run]

To produce the actual spiral wave you have to process the created data files. That part is not included. If you'd like to see how this works, drop me an email.

Tags: github, code, java, diploma thesis, models 2015/01/08
Happy 2015

Happy new year, everyone. The last post was quite a while ago. Therefore, my new year's resolution is to add posts on a regular basis. Just in case someone is reading all this1. Good luck to everybody. More interesting things and an update about my job search coming soon.

Update (2015/01/05):

I already submitted three applications (for privacy reasons I won't tell to whom). I'm still struggling to find the right search terms on job search websites like stepstone, greenjobs, etc. I think it has to do with me struggling to nail down my skills and to focus on what is important to me, at least for a new engagement. Instead of giving a detailed description of what I would like to do next, I broaden the search patterns and sift through the much more general results.

Not counting the days between Christmas and New Year's Eve, about 20-30 new job offers hit my mailbox. If one or two of these match my idea of a new job, or, even better, show me a new idea, my goal is fulfilled. Next:

  • Prepare a motivation letter!
  • Apply!
  • Wait for response!

The latter being the worst part, though I really like the first two steps!

1. Hello Romy

Tags: resolution 2015/01/03
To design a Website

I really had to make up my mind to come up with a simple and sleek website. It took me several days to think about how my personal homepage should look. Nowadays, every institute's home page provides a staff list where (usually) every staff member is asked to upload some content—see my personal PIK page, for example—but this kind of design and content is not what I was looking for. And I love it simple!

So, I decided that my personal website should have a simple layout, at least on the entry page. I chose tabs that, when popping up, show what the site is about: who I am, a short CV, and what I'm interested in. (It doesn't, by the way, as of now!) The main entry page should also link to this blog, which it does.

So far, so good. To allow some interactive behavior I included CSS overlays that pop up when clicking on the respective tab. For a consistent look, I use the same two fonts throughout, Josefin Slab and EB Garamond, which you can get from Google Fonts here and here. I like the style of these fonts; Josefin Slab especially reminds me of the '20s or '30s, and EB Garamond is quite good to read as well.

Tags: design, html, css 2014/12/08
New website now online

I've read many times about the benefits of having your own web page, or at least an online resume (for example, here or here). The reason: when potential employers look up your name on Google (or Altavista), they are likely to find something about you. So why not help your employer-to-be and provide all essential information about you on your own web page? There, you can show how credible you are, what you like to do, which skills you have, and which projects you are working on. For this reason (to be honest, I always wanted to have my own server with my own domain), I obtained the domain last week.

After setting up an SSL certificate for encryption—"Do you copy, NSA?"—using StartSSL™ Free, I had to rework the main web page first. By default, it was index.lighttpd.html, provided by LIGHTTPD, the light-weight web server that is running on my RaspberryPi.

I wanted to keep the web page as simple as possible to get a nice overview of the content. Being a fan of simplicity I also liked the simple designs of the social media buttons that I use (also for the blog engine). Thanks, Dan Leech! For the web page navigation buttons I chose to stick to CSS.

By now, only Blog has some real content, namely this blog. I will update the About and CV sections soon but I'm still looking for a simple way to show off the one-page content of each of these two.

BTW, as of this week you can also follow me on Twitter!

Tags: html, css, ssl 2014/12/01
Building a website generator

Hello everyone. This is my first post on my blog, where I want to provide some useful information about my job search. I started out as a physicist (TU Berlin) about six years ago. Afterwards I went to Hamburg for my graduate studies at Hamburg University and the Max Planck Institute for Meteorology. After finishing my Ph.D., I went back to Berlin to work at the Potsdam Institute for Climate Impact Research as a postdoctoral researcher. My project, about the melting of the Greenland Ice Sheet on different time scales, ended two weeks ago. Now I'm taking my time to recollect what I've done so far and what I want to do next. This blog should help me and others keep track of the expected and unexpected things to come, the difficulties, and the joy and pain of finding a new job outside of academia.

As of this week, I started writing a static blog engine in Python (pystable), which can be found on Github and cloned via

git clone

Actually, this very web page is based on pystable.

Tags: pystable, code, python, introduction 2014/11/28
The Ice Sheet model SICOPOLIS

Recently, I started working with SICOPOLIS, a three-dimensional dynamic/thermodynamic ice sheet model. Download and usage are fairly easy, as long as you stick to the predefined experiments. Setting up an alternative configuration is not complicated, but using your own forcing data is. Nevertheless, the code is well written, and the output format is standard netCDF, which can be easily processed with—guess what—Python and matplotlib.

Here is one example.

Initially, Greenland has no land ice. Under current boundary conditions, i.e. present-day surface temperature and precipitation rates, the ice sheet starts to grow (left figure). After nine thousand years, large parts of southern and central Greenland are ice-covered, mainly because of the large snowfall compared to northern Greenland (right figure).

And I'm amazed. Not because of the results, they seem straightforward. I'm amazed because of the time scales tackled here. Within minutes (literally!), an evolution of a whole ice sheet, which would take many thousands of years, is at my feet. And now I'm becoming aware of the work that lies ahead of me.

In the past 12 months, I was working more or less on the climatology of atmospheric fields like air pressure, air temperature and various radiative heat fluxes. But this was just tens of years. Here, we are talking about time scales 2–3 orders of magnitude larger than this. What an eye-opener. The real question I need to answer is:

How does the seasonal melt cycle affect the large-scale evolution of the Greenland Ice Sheet?

Tags: ice sheet 2013/09/04
f2py in action

Recently, I made heavy use of f2py—a tool that converts your FORTRAN code into a Python module. Here is how to use it:

$ f2py -c -m module_name fortran_code.f90

In the current directory there is now a shared object of your module, module_name.so. To use it in Python, just do the usual import. A docstring is generated for you as well, so you know how to call the methods, i.e., the FORTRAN subroutines or functions

import module_name as mymodule
print(mymodule.__doc__)

Let's suppose you've written a subroutine that does some calculations that are faster in FORTRAN than in Python

! fortran_code.f90
subroutine some_calculations(a,b,c)
  implicit none
  real, intent(in) :: a, b
  real, intent(out) :: c
  real :: some_result

  ! start some calculations (placeholder)
  some_result = a + b
  c = some_result
end subroutine

To call that subroutine, our Python script has to look like this

import module_name as mymodule
print(mymodule.__doc__)

x1 = <some value>
x2 = <also some value>

result = mymodule.some_calculations(x1, x2)

That's it. Pretty easy, huh? Now you can use your fast code and let Python do the rest, like making a proper data analysis using numpy or scipy, or doing the graphics using matplotlib.

Tags: f2py, fortran, python, tools 2013/08/02
How to calculate the heat flux within a snowpack

It sounds simple but requires at least some basic knowledge of partial differential equations. You also need to know about discretization so that you can solve the equation

\[ c_p \rho \frac{\partial T}{\partial t} = K \frac{\partial^2 T}{\partial x^2} \]

on your computer. Most commonly used are finite difference schemes, for example, explicit (forward time stepping), implicit (backward time stepping), or the Crank-Nicolson (both, forward and backward time stepping) schemes.
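Here is a minimal sketch of the explicit (forward-in-time, centred-in-space) scheme for this equation; the snow properties below are assumed values for illustration, not those used in the actual model:

```python
import numpy as np

# Explicit scheme for c_p * rho * dT/dt = K * d2T/dx2 on a 5 m snow column.
# All parameter values are illustrative assumptions.
K   = 0.3     # W m-1 K-1, effective thermal conductivity of snow (assumed)
rho = 300.0   # kg m-3, snow density (assumed)
cp  = 2000.0  # J kg-1 K-1, specific heat capacity of ice (assumed)
kappa = K / (cp * rho)          # thermal diffusivity (m2 s-1)

nx = 51
dx = 5.0 / (nx - 1)
dt = 0.4 * dx**2 / kappa        # stability requires dt <= dx^2 / (2 kappa)

T = np.full(nx, -10.0)          # initial temperature profile (degC)
T_top, T_bottom = 0.0, -10.0    # fixed surface and basal temperatures

for _ in range(20000):
    T[0], T[-1] = T_top, T_bottom   # apply boundary conditions
    T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2.0*T[1:-1] + T[:-2])

# after many steps, T relaxes to the linear steady-state profile
print(T[nx//2])
```

The time step obeys the explicit stability limit; an implicit or Crank-Nicolson scheme would allow much larger steps at the cost of solving a linear system each step.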


Here you see an example of a vertical temperature profile of a snowpack of 5m thickness. On top (left, 0m) the model is forced with 10 years of surface temperature at the location of the JAR automatic weather station, as modelled by MAR. You can see the alternating summer and winter heat and cold waves entering the snowpack. The annual cycle is the strongest, because a slower surface signal (i.e., longer time scales) leads to a larger penetration depth \(l\): \[ l = \sqrt{2\,\frac{K}{c_p \rho}\,\Delta t} \]

Because of the (almost) periodic forcing the signal at depth is phase shifted and decays exponentially.
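Plugging illustrative snow properties into the penetration-depth formula (assumed values, not those from the post) shows why the annual cycle dominates a 5 m snowpack while the daily cycle barely gets in:

```python
import numpy as np

# l = sqrt(2 * (K / (cp*rho)) * dt) for different forcing periods.
# Snow properties are illustrative assumptions.
K, rho, cp = 0.3, 300.0, 2000.0   # W m-1 K-1, kg m-3, J kg-1 K-1 (assumed)
kappa = K / (cp * rho)            # thermal diffusivity

day = 86400.0
year = 365.25 * day

l_daily  = np.sqrt(2 * kappa * day)   # ~0.3 m: daily cycle stays near the surface
l_annual = np.sqrt(2 * kappa * year)  # ~5.6 m: annual cycle reaches the full depth
print(l_daily, l_annual)
```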

Tags: snow model 2013/03/25

MathJax

Ever longed for a simple way to write a mathematical expression in HTML, just to realize there is no proper way of doing that? There is: MathJax (thanks to JavaScript). All you have to do is insert the following snippet into the head of your HTML file.

<script type="text/javascript"
  src="..."></script>
Here's an example of the Lorenz attractor (taken from the MathJax Demo page): \begin{align} \dot{x} &= \sigma(y-x) \\ \dot{y} &= \rho x - y - xz \\ \dot{z} &= -\beta z + xy \end{align}

The source:

\dot{x} &= \sigma(y-x) \\
\dot{y} &= \rho x - y - xz \\
\dot{z} &= -\beta z + xy

You probably have to escape the two backslash characters \\ (the newline command). It also works for inline expressions like this: \(e^{\imath \pi} + 1 = 0\).

Tags: tools, latex 2012/10/18
About Mario Krapp
I'm a physicist by training and graduated in Earth System Science.

And I like coding. I've been working with complex computer models ever since my undergrad and I enjoy data exploration and data analysis to gain insights into the underlying principles.

Feel free to contact me.