When examining complex fluid flows, from either experimental or numerical data, it can often be instructive to plot (and, if the data is time-resolved, animate) contours of a given quantity with an overlay of the instantaneous velocity vectors. Ultimately the choice of the scalar (velocity or vorticity component, pressure, temperature) you wish to use as a contour and the field you wish to display as a vector (velocity, vorticity) will depend on the data you have available and the relationship you wish to explore. For this particular example I have particle image velocimetry (PIV) based time-resolved experimental measurements of a turbulent boundary layer. These are planar measurements in the streamwise and wall-normal directions (x-y), consisting of 6304 instantaneous velocity fields of 230 x 123 in-plane velocity vectors, recorded at a rate of 1250 Hz in the Laboratory for Turbulence Research in Aerospace and Combustion (LTRAC) horizontal water tunnel at Monash University, Australia. In order to emphasise the turbulent fluctuations the mean velocity has been subtracted from the instantaneous velocity field, such that the velocity vectors represent the fluctuating velocity (*u' = u – U*) and the iso-contours represent the fluctuating streamwise velocity component (*u'*).

In the past I tended to use custom C++ codes for post-processing data and Gnuplot for line and vector plots. While Gnuplot can produce excellent plots, the workflow required to output the data in a multi-dimensional ASCII format and then read it into Gnuplot can occasionally be quite cumbersome. This is particularly true when exploring and comparing data from different sources and file formats. The Python programming language, combined with libraries such as Matplotlib for plotting and Scipy for data input, can noticeably simplify this process. Python can also be easily installed on multiple operating systems without having to play with multiple configure and make files for different C++ data processing libraries and different compilers on different computers. This is particularly handy for my own workflow of Mac laptop to Linux desktop and Linux clusters.

The data I use tends to be stored in either NetCDF or HDF5 binary data formats, which can be easily shared between different computer architectures and read with Python, C++ or Fortran programming languages. The following Python (v2.7) script contains a function that uses the Scipy library to read the velocity fields from a NetCDF file and returns the input data as Numpy arrays:

```python
import numpy as np
from scipy.io import netcdf

# load velocity field from netcdf file
# function expects input filenames in the form velocity_0001.nc
#   prefix is the input data filename prefix e.g. 'velocity_'
#   number is the current image file number e.g. 1
#   digits is the number of digits in the filename number e.g. 4
#   ext is the image filename extension e.g. '.nc'
#   coord1,2,3 are the input coordinate variable names in the input file
#   vel1,2,3 are the input velocity component variable names in the input file
#   flag is the input velocity data flag
# use coord or vel = 'na' if a particular dimension or variable does not exist
def load_velocity(prefix,number,digits,ext,coord1,coord2,coord3,vel1,vel2,vel3,flag):
    if digits == 4:
        num = "%04d" % number
    if digits == 6:
        num = "%06d" % number
    fname = prefix+num+ext
    print 'loading velocity field: ', fname
    # open netcdf file
    if ext == '.nc':
        ncfile = netcdf.netcdf_file(fname, 'r')
    # read axes
    if coord1 != 'na':
        x = ncfile.variables[coord1]
    else:
        x = np.zeros([1])
    if coord2 != 'na':
        y = ncfile.variables[coord2]
    else:
        y = np.zeros([1])
    if coord3 != 'na':
        z = ncfile.variables[coord3]
    else:
        z = np.zeros([1])
    # read velocity
    if vel1 != 'na':
        u = ncfile.variables[vel1]
    else:
        u = np.zeros([z.shape[0],y.shape[0],x.shape[0]])
    if vel2 != 'na':
        v = ncfile.variables[vel2]
    else:
        v = np.zeros([z.shape[0],y.shape[0],x.shape[0]])
    if vel3 != 'na':
        w = ncfile.variables[vel3]
    else:
        w = np.zeros([z.shape[0],y.shape[0],x.shape[0]])
    # change type and convert to numpy arrays
    unp = np.asarray(u[:],np.float32)
    vnp = np.asarray(v[:],np.float32)
    wnp = np.asarray(w[:],np.float32)
    xnp = np.asarray(x[:],np.float32)
    ynp = np.asarray(y[:],np.float32)
    znp = np.asarray(z[:],np.float32)
    # read flag
    if flag != 'na':
        f = ncfile.variables[flag]
    else:
        f = np.zeros([z.shape[0],y.shape[0],x.shape[0]])
    fnp = np.asarray(f[:]*1.,np.float32)
    ncfile.close()
    return xnp,ynp,znp,unp,vnp,wnp,fnp
```

The above function can be used to load a three-dimensional velocity field with two velocity components as follows:

```python
# input parameters
velprefix = 'merge_xtfilt_'
digits = 4
fnum = 1
ext = '.nc'
# variable names
coord1 = 'x'
coord2 = 'y'
coord3 = 't'
vel1 = 'dx'
vel2 = 'dy'
vel3 = 'na'
flag = 'flag'
# load velocity field (in this case the file is stored as time,y,x)
x,y,t,u,v,w,vecflag = load_velocity(velprefix,fnum,digits,ext,coord1,coord2,coord3,vel1,vel2,vel3,flag)
```
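As a quick sanity check of the (time, y, x) layout the loader expects, the same scipy module can also write a small NetCDF file and read it back. The sketch below uses illustrative dimension sizes and the same variable names as above, but everything else about the file is made up for the example:

```python
import numpy as np
from scipy.io import netcdf_file  # scipy's lightweight NetCDF3 reader/writer

# write a tiny NetCDF file in the (t, y, x) arrangement the loader expects
nx, ny, nt = 4, 3, 2
f = netcdf_file('velocity_0001.nc', 'w')
f.createDimension('x', nx)
f.createDimension('y', ny)
f.createDimension('t', nt)
for name, size in (('x', nx), ('y', ny), ('t', nt)):
    var = f.createVariable(name, np.float32, (name,))
    var[:] = np.arange(size, dtype=np.float32)
dx = f.createVariable('dx', np.float32, ('t', 'y', 'x'))
dx[:] = np.ones((nt, ny, nx), dtype=np.float32)
f.close()

# read it back and confirm the layout
g = netcdf_file('velocity_0001.nc', 'r')
u_check = np.asarray(g.variables['dx'][:], np.float32)
assert u_check.shape == (nt, ny, nx)
g.close()
```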

In this case the third coordinate corresponds to *time*, with each two-dimensional plane of the file representing the velocity field at a different time. The average velocity (assuming homogeneous flow in the streamwise direction) can therefore be determined by averaging the velocity field over the 1st (time) and 3rd (streamwise) axes of the input data array.

```python
# compute u,v mean profiles
um = np.zeros(u.shape[1])
vm = np.zeros(v.shape[1])
# loop over each wall-normal or y-position
for j in range(0,u.shape[1]):
    um[j] = np.average(u[:,j,:])
    vm[j] = np.average(v[:,j,:])
```
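For reference, the explicit loop can also be avoided: NumPy's `mean` accepts a tuple of axes (NumPy 1.7 and later), so averaging over time (axis 0) and the streamwise direction (axis 2) in a single call gives the same wall-normal profile. The array below is synthetic, just to demonstrate the shapes:

```python
import numpy as np

# synthetic (time, y, x) field standing in for the PIV data
rng = np.random.default_rng(0)
u = rng.standard_normal((10, 5, 8)).astype(np.float32)

# one mean value per wall-normal (y) position
um = u.mean(axis=(0, 2))

# identical to the explicit loop over y
um_loop = np.array([u[:, j, :].mean() for j in range(u.shape[1])])
```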

In order to generate an animation of the velocity field it is necessary to first generate the desired plots for the velocity field at each time-step. The following script loops through each time-step, generating and saving contour and vector plots as shown at the top of the post.

```python
import matplotlib.pyplot as plt

nsteps = 6304              # number of time-steps to plot
imageprefix = "velocity_"
# consider each time-step
for nt in range(1,nsteps):
    # compute velocity fluctuation
    uf = np.copy(u[nt,:,:])
    vf = np.copy(v[nt,:,:])
    # subtract mean velocity at each y position
    for j in range(0,u.shape[1]):
        uf[j,:] -= um[j]
        vf[j,:] -= vm[j]
    # set figure size in inches
    fig = plt.figure(1,figsize=(9.,3.5))
    plt.clf()
    ax = fig.add_subplot(111, aspect='equal')
    # set contour levels
    levels = np.arange(-5.2,5.2,0.2)
    plt.contourf(x,y,uf[:,:],levels)
    cbar = plt.colorbar()
    cbar.set_label('$u\'^+$', fontsize=18)
    # plot vectors of fluctuating velocity (every 2nd vector)
    plt.quiver(x[::2],y[::2],uf[::2,::2],vf[::2,::2],units='xy',angles='xy',
               scale=0.04,width=3,headwidth=5,headlength=8,minlength=0.1)
    # set plot limits and axis titles
    plt.ylim([20,1200])
    plt.xlabel('$x^+$', fontsize=18)
    plt.ylabel('$y^+$', fontsize=18)
    # save plot to image file at a resolution of 300 dpi
    plt.savefig("%s%04d.png" % (imageprefix,nt), dpi=300)
```

Using FFmpeg, any series of images can easily be converted to a video and compressed using a range of different video codecs. I won't go into the various options for FFmpeg, and I'm sure it is possible to use different libraries in Python to generate videos; the method I've used involves running FFmpeg from the terminal using:
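Staying entirely within Python is also an option: Matplotlib's `animation` module can render each time-step and, if FFmpeg is installed, write the video directly. A minimal sketch with synthetic data (all sizes and names here are illustrative, not from the actual PIV dataset):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen, no display needed
import matplotlib.pyplot as plt
from matplotlib import animation

# synthetic (time, y, x) field standing in for the fluctuating velocity
u = np.random.rand(5, 12, 20)
x = np.arange(u.shape[2])
y = np.arange(u.shape[1])

fig, ax = plt.subplots()

def draw(nt):
    # redraw the contour plot for time-step nt
    ax.clear()
    ax.contourf(x, y, u[nt])
    ax.set_title('step %d' % nt)

anim = animation.FuncAnimation(fig, draw, frames=u.shape[0])
# with FFmpeg installed, this writes the video directly:
# anim.save('video.mp4', writer='ffmpeg', fps=15, dpi=150)
```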

ffmpeg -i velocity_%04d.png -c:v libx264 -r 15 -pix_fmt yuv420p video.mp4

where *-r 15* sets the output frame rate to 15 frames (i.e. 15 input images) per second. A sample video consisting of 1000 velocity fields is shown below for a zero pressure gradient turbulent boundary layer with a momentum thickness based Reynolds number of \(Re_\theta = 2250\).

This is only one way of plotting and animating numerical or experimental data. The code and process above can easily be adapted to different input data and various styles of plots. Hopefully this process works as well for you as it does for me.