Easy version control for single files: RCS

Version control is a wonderful invention: it allows you to keep track of what has changed in a file (or directory hierarchy) from one revision to the next.  When you have made some changes you “check in” (or “commit”) and leave a quick log message.  You can then later view the entire history of that file, look at the differences between any two versions, or check out any of the previous versions.

Sometimes people do not want to use version control because they feel that setting up a Mercurial or Git repository is a chore, and that it is not worth the effort for a single file.  This concern is mostly valid, though I would say that the problem is not really that it’s a single file, but rather that the directory around it may have many files that you do not want to control together.  We will use RCS to show lightweight version control on single files.  RCS is packaged in all the major GNU/Linux distributions ("sudo apt install rcs" on Debian-based systems, "sudo yum install rcs" on Red Hat-based systems).

An example: on GNU/Linux and UNIX systems (and others), people who use the bash shell put various commands to be run at startup in a file called .bashrc – these are for the most part commands that set environment variables.

Here is how you could start controlling your .bashrc with a single command. I just did this in my home directory (try it!):
$ cd ~
$ ci -l .bashrc
.bashrc,v
>> initialization file for bash
>> .
initial revision: 1.1
done
$

Now you can see what that ci -l .bashrc has done:

$ ls -l .bashrc*
-rw-r--r-- 1 markgalassi markgalassi 5789 Feb 8 2016 .bashrc
-r--r--r-- 1 markgalassi markgalassi 6038 Jun 28 10:31 .bashrc,v
$

There is a new file with a ,v extension: .bashrc,v, and the original .bashrc file is still there.  You will never touch the .bashrc,v file yourself, but remember that it is the file that keeps the historical information.  Let us make a change to .bashrc.  For example: I like to keep my aliases and shell functions in a separate file called .bash_aliases, so I will use a text editor to add this line toward the top of my .bashrc:

. $HOME/.bash_aliases

(note that you will want to make yourself a .bash_aliases file, either empty or with a few aliases or shell functions — I’ll give you my favorite one here:

function lst()
{
    ls -lashtg ${1:-.} | head -13
}

which I use virtually every time I change into a directory)

Now I want to check in the new version of .bashrc:
$ ci -l .bashrc
.bashrc,v
>> Added a line to source my .bash_aliases file
>> .
done
$

So this is the workflow: use "ci -l filename" every time you make some changes to that file. Nothing else!

So what’s the point? Let’s say that I want to look at what changed between versions. First I will look at a log of all changes:
$ rlog .bashrc
[...]
revision 1.2 locked by: markgalassi;
date: 2017/06/28 16:52:59; author: markgalassi; state: Exp; lines: +2 -0
Added a line to source my .bash_aliases file
----------------------------
revision 1.1
date: 2017/06/28 16:47:46; author: markgalassi; state: Exp;
Initial revision
[...]
$

This tells me that I have revisions 1.1 and 1.2. To look at the differences between them I can type:
$ rcsdiff -u -r1.1 -r1.2 .bashrc
===================================================================
RCS file: .bashrc,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -r1.1 -r1.2
--- .bashrc 2017/06/28 16:47:46 1.1
+++ .bashrc 2017/06/28 16:52:59 1.2
@@ -1,5 +1,7 @@
# Mark Galassi's .bashrc


+. $HOME/.bash_aliases
+
DIO_LOCAL=$HOME/DLOCAL
export PATH=$DIO_LOCAL/bin:$PATH
export LIBRARY_PATH=$DIO_LOCAL/lib:$DIO_LOCAL/lib64:$LIBRARY_PATH
$

This is a unified diff: it shows a bit of the context around the change in the file, and marks with a + (plus sign) the lines that were added (it would have marked with a - (minus sign) any lines that were removed).
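And if you ever want an old version back, the co (“check out”) command, which is part of the same RCS package, can print any revision to standard output with the -p option.  For example, to save a copy of revision 1.1 in a scratch file without disturbing your current .bashrc:

$ co -p -r1.1 .bashrc > /tmp/bashrc-1.1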

Some notes about the example I just gave and when to use RCS:

  • If you have not yet used version control, start right now!  You should always use version control on files you change.
  • This was a single file in a cluttered directory: your home directory has many files and subdirectories that are not tightly related, so using Mercurial or Git would be awkward, since they try to own the whole directory.  That’s why RCS works well here.
  • People have tried to create systems that use Mercurial or Git to control the various “dot” files in your home directory (.bashrc is not the only one – many programs keep configuration files in your home directory that start with a dot).  These schemes often involve keeping a separate directory under version control and then using symbolic links to make those files appear in your home directory.  I find that a bit too cumbersome, so I have taken the approach of using RCS for the single “dot” files in my home directory, and Mercurial for those programs which use a whole subdirectory for configuration, such as emacs with its .emacs.d directory.

There is much more that can be discussed and demonstrated, but I will point to existing tutorials for that. My purpose here was to get you going with version control on a single file.

Posted in programmingforresearch, sysadmin, try this, unix, version-control

aspiring hacker’s reading list

Sometimes a random brief post has you spending some time marshaling your thoughts on a given subject.  Someone on slashdot asked about a reading list for an aspiring “coder” (whatever that is; hacker seems more suited).  I put out some thoughts, then I saw that slashdot munged the newlines, so I’m reproducing them here and adding a bit.

I have occasionally contributed to other reading lists – here is a non-software-engineering-related post on Quora.


Any comments or suggestions for more books for aspiring hackers?

nonfiction broad-interest

Steven Levy: Hackers

Tracy Kidder: The Soul of a New Machine

Douglas Hofstadter: Gödel, Escher, Bach

Cristopher Moore and Stephan Mertens: The Nature of Computation

fiction/fun

Neal Stephenson: Reamde (note the spelling)

Geoffrey James: The Tao of Programming

nonfiction textbookish but worth reading through

Marc Rochkind: Advanced UNIX Programming

W. Richard Stevens: Advanced Programming in the UNIX Environment

Michael Kerrisk: The Linux Programming Interface

Ritchie and Thompson: “The UNIX Time-Sharing System” (Bell System Technical Journal) and all the other reprints in which they discuss the evolution of UNIX

Kernighan and Ritchie: The C Programming Language

Kernighan and Pike: The Unix Programming Environment

Abelson, Sussman, Sussman: Structure and Interpretation of Computer Programs 

philosophical/free-software

(to keep your thinking straight about why to do these things)

Richard Stallman: The GNU Manifesto – http://www.gnu.org/gnu/manifesto.en.html

GNU Project, other essays: http://www.gnu.org/philosophy/philosophy.html

Posted in books, programmingforresearch, unix

a bit of knowledge of regular expressions

Today a collaborator who is working toward more automation in her ordinary computing work asked me how to use the amazing command-line browser wget to get the images out of a web page.

wget has options to grab a whole web site: if you want a page and everything linked below it on the same server, you can use the “recursive”, “no parent” and “no clobber” options:

wget -r -np -nc http://www.gnu.org/software/wget/manual/html_node/index.html

will give you a basic mirror of that site, and it will work quite well for this one because it does not have active content that needs to be followed heuristically.  But it will not retrieve linked files or images unless they are under the top-level directory of that URL.

The specific page for this example is “The Illustrated Guide to a Ph.D.”, which has gone viral recently.  Note that when the openculture web site took it in, they separated out the images: if you look at the page’s source (control-U in most browsers) you will see that the page lives under http://www.openculture.com/2010/09/ , but its images are stored at locations such as: http://www.openculture.com/images/PhDKnowledge.001.jpg

So the simple wget command will not grab the images.  You might want to automate the process of downloading them all, and there is a very simple shell command sequence to do so.  Start by making yourself a sandbox area:

mkdir ~/sandbox
cd ~/sandbox

Then grab the top level URL:

wget http://www.openculture.com/2010/09/the_illustrated_guide_to_a_phd.html

Now you will have the file the_illustrated_guide_to_a_phd.html in your current directory.  Next experiment with grep and sed to get the list of jpg URLs:

grep '\.jpg' the_illustrated_guide_to_a_phd.html

Now that you see which lines have a .jpg in them, use the sed command to edit out everything except the URL.  A typical line looks like:

<div class="graphic"><img alt="" src="http://www.openculture.com/images/PhDKnowledge.006.jpg" width="440" /></div>

which clearly needs to be cleaned up.  This sed command uses a regular expression that matches everything between the =” and the jpg” and prints just that part.  The \( and \) make a “group” of the interesting part, and the group is what gets printed back out as the \1:

grep '\.jpg' the_illustrated_guide_to_a_phd.html | sed 's/.*="\(.*jpg\)".*/\1/'

which gives you lines like:

http://www.openculture.com/images/PhDKnowledge.005.jpg

You are now ready to use wget on those individual image URLs.  First collect them into a shell variable:

JPG_LIST=`grep '\.jpg' the_illustrated_guide_to_a_phd.html | sed 's/.*="\(.*jpg\)".*/\1/'`

Then iterate through that list grabbing each URL:

for jpg_url in $JPG_LIST
do
    echo $jpg_url
    wget "$jpg_url"
done

To see this whole inline script (you can paste it in as it is):

mkdir ~/sandbox
cd ~/sandbox
wget http://www.openculture.com/2010/09/the_illustrated_guide_to_a_phd.html
JPG_LIST=`grep '\.jpg' the_illustrated_guide_to_a_phd.html | sed 's/.*="\(.*jpg\)".*/\1/'`

for jpg_url in $JPG_LIST
do
    echo $jpg_url
    wget "$jpg_url"
done
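As a variation: GNU wget can read a list of URLs from standard input when you give it “-i -”, so the variable and the loop can be collapsed into a single pipeline (the sort -u is only there to drop duplicate URLs):

grep '\.jpg' the_illustrated_guide_to_a_phd.html \
    | sed 's/.*="\(.*jpg\)".*/\1/' \
    | sort -u \
    | wget -nc -i -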

A final note: the original article by Matthew Might is on his own web site, where he organized the page so that the images sit below the HTML file in the directory hierarchy.  This is a more robust web site layout, and the recursive wget command mirrors it well:

wget -r -np -nc http://matt.might.net/articles/phd-school-in-pictures/
find matt.might.net -name '*.jpg' -print
Posted in scripting, try this, unix

programmers who get respect

I was just re-reading this interesting rant from Zed Shaw, author of “Learn Python The Hard Way” and other “the hard way” books: http://learnpythonthehardway.org/book/advice.html

The whole blurb is worth reading to see his point of view, although I disagree with some of what he says. I found myself really enjoying this paragraph:

People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.

I also started looking at Shaw’s new book “Learn C The Hard Way” and came upon this page, in which he gives the advice:

WARNING: Do Not Use An IDE

An IDE, or “Integrated Development Environment” will turn you stupid. They are the worst tools if you want to be a good programmer because they hide what’s going on from you, and your job is to know what’s going on. They are useful if you’re trying to get something done and the platform is designed around a particular IDE, but for learning to code C (and many other languages) they are pointless.

I like it when other people save me the trouble of using harsh words.

Posted in meta, rant

Using PyEphem to get the ground coordinates of a satellite

The PyEphem package is pretty amazing. The tutorial will show you many things.

I had a simple application for it: I needed to take a satellite’s orbital information and get the ground longitude/latitude under that satellite at a given time.

A satellite’s orbital information is usually given by what are called “two line elements” (TLEs) (see the example below), but some calculation is necessary to go from a TLE to the position over earth at a given time. PyEphem does it with great simplicity.

You can install PyEphem with Python’s pip packaging system:

pip install pyephem

and then:

  1. start with the satellite TLE, which you can obtain in a variety of ways. In real life we might get it from a program which grabs it from the web (for example here), but to hard-code a simple case (the International Space Station) we can simply paste this code into a Python interpreter:
    import ephem
    import datetime
    ## [...]
    name = "ISS (ZARYA)";
    line1 = "1 25544U 98067A   12304.22916904  .00016548  00000-0  28330-3 0  5509";
    line2 = "2 25544  51.6482 170.5822 0016684 224.8813 236.0409 15.51231918798998";
  2. create a PyEphem Body object from it:
    tle_rec = ephem.readtle(name, line1, line2)
    tle_rec.compute()

    Note that the ephem.readtle() routine creates a PyEphem Body object from that TLE, and the compute() method recalculates all the parameters in the Ephem body for the current moment in time (datetime.datetime.now()).

    You can use any moment in time and calculate the values at that time with something like tle_rec.compute(datetime.datetime(2012, 1, 8, 11, 23, 42)) for January 8, 2012, 11:23:42am.

  3. Obtain the longitude and latitude from the tle_rec object:
    print(tle_rec.sublong, tle_rec.sublat)

    These sublong and sublat values are expressed as PyEphem Angle objects; when printed they are human-readable longitude/latitude strings: -44:41:57.2 47:52:58.9. When accessed as real numbers they are in radians, so keep that in mind when adapting them for use with something like Python’s excellent Basemap package, which uses degrees instead of radians (see the short conversion sketch after this list).

    So, two lines of code to go from TLE to ground longitude/latitude; pretty good, eh?
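Here is the conversion sketch promised above, continuing with the tle_rec object from step 2: since the Angle values behave as plain floats in radians, the standard math module does the conversion.

    import math

    # sublong and sublat act as floats in radians, so convert
    # to degrees before handing them to a degrees-based library
    lon_deg = math.degrees(tle_rec.sublong)
    lat_deg = math.degrees(tle_rec.sublat)
    print(lon_deg, lat_deg)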

Posted in mapping, python, try this

word processor versus typesetter

Today I saw a couple of scientific papers which were written with a word processor (probably Microsoft Word, but possibly LibreOffice) instead of being typeset (with TeX/LaTeX, for example). I realized how fortunate I am that I seldom have to read papers like that: they are tiring to read. Fortunately, most papers I read are typeset with TeX as the back end engine.

There are two major reasons for which a scientist writing a paper (or anyone writing a nontrivial document) should not make typesetting decisions:

1. Typesetting is an old and deep art which really matters. You can get fatigued reading a book or article that is typeset poorly, and amateurs do not know this art. One simple example is the “optimal line length”: lines should have about 60 to 65 characters, and reading is impaired when they are longer or shorter. (Try looking through a typical book and counting how many letters are on an average line.) People who set their own margins will violate this and their readers will tire. There are many more areas in typesetting where amateurish tinkering makes the reading experience subtly more tiring.

2. It is a waste of time. Fiddling with margins, fonts, and boldface is not the author’s job. The author should worry about the meaning of what s/he is writing, not the format. By playing with your word processor you are indulging in something that does not give that much satisfaction and certainly does not increase your productivity.

This applies to any document that is more than a few paragraphs long, but it gets even worse when mathematical notation is involved.

In LaTeX you write your text and the program does the structuring and typesetting for you, applying a style designed by professionals who spent a lot of time researching how to do it, and who have the benefit of centuries of wisdom on book layout. A very good introduction is the classic article The Not So Short Introduction to LaTeX.

There are a couple of things to keep in mind when getting going with LaTeX rather than a word processor approach:

1. Someone put it well once, saying that “in TeX the easy things are hard to do and the hard things are easy to do”. Getting started on your document involves including some preamble text to specify the document structure, and using special markup commands to make lists, which might seem harder than using a word processor (a minimal skeleton follows these two points). If you write a lot this learning curve disappears in a hurry, and you reap the benefits of having the truly difficult and time-consuming tasks done for you automatically.

2. Make sure that you remember that “user friendliness” is a subtle matter. The acronym WYSIWYG (“what you see is what you get”) has been well parodied as WYSIAYG (“what you see is all you get”).
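To make the first point concrete, here is a minimal sketch of the preamble and markup involved (the title, author, and section names are of course made up, and this assumes the standard article class):

\documentclass[11pt]{article}

\title{A Sample Paper}
\author{A. N. Author}

\begin{document}
\maketitle

\section{Introduction}
The body of the paper goes here; LaTeX chooses the margins,
fonts, and spacing for you.

\begin{itemize}
\item lists are made with markup commands
\item rather than with word-processor buttons
\end{itemize}

\end{document}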

Now although LaTeX guides you toward a well-structured document, many people go and find the special commands that can be used to change the layout. The most frequent thing I see is people wanting tiny margins to make a paper look like it has fewer pages (two-column mode might be better for this purpose).

There is an old saying among programmers from my generation, who were writing code in the mid-1980s: you can “write FORTRAN in any language”. Since this is a rambling long-winded rant I will explain this old adage by reminding you of the context in those years. FORTRAN, which appeared in 1957, could be said to be the “first programming language”, preceding LISP (1958) and COBOL (1959).

People who learned to program with languages like C and Pascal in the 1980s tended to pooh-pooh FORTRAN code as being very ugly, and it really was: the widely used paradigms for programming in FORTRAN were almost painfully hard to follow. But mostly FORTRAN suffered from the fact that it was designed before people used keyboards on computers: FORTRAN was made to be put on stacks of “punch cards” and fed into a mechanical reader, so the code was formatted strangely and was hard to read.

Hackers like to have “religious wars” about which programming language is “better” — just recently three of the programmers I admire most, and who are not young anymore, were at it again on the topic of C++ and ugliness. Back in the mid-1980s young programmers were learning C and proud of it, maybe experimenting with trendy new languages like Smalltalk and Modula-2, and we would say that FORTRAN was gross.

But eventually, as the dialectic process of understanding such issues had time to work its magic, we realized that a lot of C code is dreadfully difficult to read, and at a certain point I even realized that some FORTRAN code was intelligently written (and modern versions of FORTRAN are almost-but-not-quite readable). Hence the expression “you can write FORTRAN in any language”, which is probably equivalent to “l’abito non fa il monaco” (“the habit does not make the monk”), or any other expression about pigs and lipstick. There is a scholarly article on it by Donn Seeley (make sure you read the PDF; their text rendition is poor).

(Don’t take this to mean that I think all languages are OK; many programming languages lack expressiveness, or are really ugly in a variety of ways.)

So in the spirit of a rant I spent much more time on a sideshow. Ah well… The point is: you can do poor typesetting in LaTeX too, just as you can in MS Word. The difference is that in LaTeX you have to work hard to do poor typesetting, while in Word you cannot do good typesetting. Some day the word processors might add some of the important points of typesetting (ligatures, intelligent spacing, good math formulae), at which point good typesetting will become possible with them, but it will still be very difficult.

If you want to see more on this, including a lot of rants on how bad word processors are, a web search for the title of this posting gives some funny ones.


starting with hdf5 — writing a data cube in C

pre(r)amble

Let us start using hdf5 to write out a “data cube” — in our example a series of scalar values f(x, y, z).

hdf5 is a library written in C, but it can be called from many programming languages.  We will start by creating and writing a data cube from a C program (don’t worry, I will also show you a Python version in another post), and then we will discuss various ways of examining this data cube.  Edit a file, for example make-sample-cube.c.

the code

First include these .h files:

#include <hdf5.h>
#include <hdf5_hl.h>

This gives you access to the hdf5 API, including the “hdf5 lite” (H5LT) convenience layer.  We will also need some more standard include files:

#include <stdlib.h>
#include <math.h>
#include <assert.h>

Let’s go top-down and write a main() routine that shows what we want to do in this program. We will have a fill_cube() routine which creates our data cube from mathematical functions, and a write_cube() routine which saves them to disk as hdf5 files. Since we’re taking a top-down approach, we declare their prototypes before the main() function:

double *fill_cube(int nx, int ny, int nz);
void write_cube(double *data, int nx, int ny, int nz, char *fname);

then the actual main():

int main(int argc, char *argv[])
{
  double *data_cube;
  int nx, ny, nz;
  nx = 64;
  ny = 57;
  nz = 50;
  char *fname = "sample_cube.h5";
  data_cube = fill_cube(nx, ny, nz);
  write_cube(data_cube, nx, ny, nz, fname);
  free(data_cube);
  return 0;
}

Now let’s write the functions we use:

/** 
 * Fills up a cube with artificial data so that the various slices
 * will be interesting to examine and visualize.  Note that this
 * routine uses malloc() to allocate memory for the cube, so the
 * caller has to use free() when it has finished using this cube.
 * 
 * @param nx number of samples in the x direction
 * @param ny number of samples in the y direction
 * @param nz number of samples in the z direction
 * 
 * @return the data cube, with space freshly allocated by malloc()
 */
double *fill_cube(int nx, int ny, int nz)
{
  int i, j, k;
  double x, y, z;
  double *data;
  data = (double *) malloc(nx*ny*nz*sizeof(*data));
  for (i = 0; i < nx; ++i) {
    for (j = 0; j < ny; ++j) {
      for (k = 0; k < nz; ++k) {
        x = (i - nx/2)/100.0;
        y = (j - ny/2)/100.0;
        z = (k - nz/2)/100.0;
        double val = (exp((-(z)*(z)/1.0
                           -(y)*(y)/1.0)
                          * (i+1)*(i+1)/10.0)
                      * (x+1)*(x+1));
        data[i*ny*nz + j*nz + k] = val;
      }
    }
  }
  return data;
}

Finally the write_cube() routine:

/** 
 * Writes a simple data cube out to disk as an hdf5 file
 * 
 * @param data the data cube, in row-major order
 * @param nx number of samples in the x direction
 * @param ny number of samples in the y direction
 * @param nz number of samples in the z direction
 * @param fname name of the hdf5 file to write
 */
void write_cube(double *data, int nx, int ny, int nz, char *fname)
{
  /* now the various steps involved in preparing an hdf5 file */
  hid_t file_id;
  /* our cube is a three-dimensional array of values, so the
     hdf5 rank is 3 */
  hsize_t dims[3] = {nx, ny, nz};
  herr_t status;

  /* create a HDF5 file */
  file_id = H5Fcreate(fname, H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
  /* create and write a double type dataset named "/datacube" */
  status = H5LTmake_dataset(file_id, "/datacube", 3, dims,
                            H5T_NATIVE_DOUBLE, data);
  assert(status != -1);
  /* add some hdf5 attributes with the metadata we need */
  H5LTset_attribute_int(file_id, "/datacube", "nx", &nx, 1);
  H5LTset_attribute_int(file_id, "/datacube", "ny", &ny, 1);
  H5LTset_attribute_int(file_id, "/datacube", "nz", &nz, 1);
  H5LTset_attribute_string(file_id, "/datacube", "x_units", "m");
  H5LTset_attribute_string(file_id, "/datacube", "y_units", "kg");
  H5LTset_attribute_string(file_id, "/datacube", "z_units", "degK");

  status = H5Fclose(file_id);
  assert(status != -1);
}

compiling and running

You will need the hdf5 library, which you can get on an Ubuntu or Debian system with:

sudo apt-get install libhdf5-serial-dev

You can compile this program with:

gcc -o make-sample-cube make-sample-cube.c -lm -lhdf5 -lhdf5_hl

You can run this program with:

./make-sample-cube

This creates a file called sample_cube.h5, and in a future post I will discuss various ways of looking at this data cube: slicing it and visualizing it.
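In the meantime, if you have the hdf5 command-line tools installed (the hdf5-tools package on Debian-based systems, if memory serves), you can already peek at the file without writing any code; the -H option asks h5dump to show only the header information (datasets, dimensions, and attributes), not the data itself:

h5dump -H sample_cube.h5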

discussion 1

Now I should find those honeyed words which allow you to wrap your head around the use of hdf5, understanding the broad strokes and the nuances at the same time.  That is partly possible for a program this simple, but you will soon learn that hdf5 goes into some brutal (and powerful) nitty-gritty, at which point simple explanations miss the point entirely.

As usual with any library we #include the hdf5 .h files. Then we build a cube of data, and it’s worth mentioning in passing how we store the cube data in a C array:

C is clumsy at storing multidimensional arrays: you can use fixed-size arrays (you get away with this when you’re first learning C, but not after), you can represent your multidimensional array using a C one-dimensional array, or you can do many nested memory allocations so that you can still use the syntax a[i][j][k].

After experimenting with the various approaches, people usually end up not using multiple memory allocations, but rather a single one-dimensional array with index arithmetic to access the data in it.  The routine fill_cube(), for example, uses this approach: what you might ideally represent as:

data[i][j][k] = val;

is instead represented as:

data[i*ny*nz + j*nz + k] = val;

This is called “row-major ordering” (the last index is the one that varies most rapidly), and it is the convention used in C and many other languages (with the exceptions of FORTRAN and Matlab, which use column-major ordering).  You probably do not need to learn much more, but you can read the Wikipedia article on row-major ordering.
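If you find the index arithmetic error-prone, you can centralize it in one place.  This is just a sketch, and the macro name IDX3 is my own invention, not anything from hdf5:

/* hypothetical helper: the row-major offset of element (i, j, k)
   in a one-dimensional array holding an nx * ny * nz cube */
#define IDX3(i, j, k, ny, nz)  ((i)*(ny)*(nz) + (j)*(nz) + (k))

With this, the assignment in fill_cube() would read data[IDX3(i, j, k, ny, nz)] = val;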

how the hdf5 writing is done

With that out of the way, let us see how we write the hdf5 file.  hdf5 allows very complex and diverse ways of saving data.  For example, you can have multiple datasets (these are almost always arrays of numbers), each with all sorts of attributes (these are almost always metadata).

In this example we have a single dataset (our cube) and we start with very simple metadata/attributes (just the sizes and the units, which we have playfully set to be meters, kilograms and degrees Kelvin).  This allows us to use the “hdf5 lite” (H5LT) layer, a simplified API that covers the common cases.

The approach you can see in the code is that you take the following steps:

  1. Create an hdf5 file (it starts out empty).
  2. Make a dataset — this turns your array of data into an hdf5 “dataset”.
  3. Set some attributes (i.e. metadata).
  4. Close the file.

Note that there was no write() type of call: whatever you do with the datasets associated with an open file is guaranteed to be written out when you close the file.
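To convince yourself that the file really contains the cube, here is a sketch of reading it back with the same H5LT layer; it assumes the file name, dataset name, and the nx/ny/nz attributes written above, and it omits error checking:

#include <hdf5.h>
#include <hdf5_hl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  hid_t file_id = H5Fopen("sample_cube.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
  int nx, ny, nz;
  /* recover the dimensions from the attributes we stored */
  H5LTget_attribute_int(file_id, "/datacube", "nx", &nx);
  H5LTget_attribute_int(file_id, "/datacube", "ny", &ny);
  H5LTget_attribute_int(file_id, "/datacube", "nz", &nz);
  double *data = malloc(nx*ny*nz*sizeof(*data));
  /* read the entire dataset in a single call */
  H5LTread_dataset_double(file_id, "/datacube", data);
  printf("data[0] = %g\n", data[0]);
  free(data);
  H5Fclose(file_id);
  return 0;
}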

discussion 2 with mild rant

If this were all you ever did with hdf5, I would say that the C interface for writing hdf5 files is a very good example of a C API.  Unfortunately, as soon as we go beyond this simple type of example (for instance, to read slices of data in efficient ways) the programming gets very detailed and ugly, and in my opinion this is one of the current design problems with hdf5: there is a nice API (the H5LT layer) for very simple cases (which you will very soon outgrow), and a powerful (but not nice) API for the full-blown library, but there is no intermediate-level API for what most programmers need most of the time.

Posted in data, rant, try this