1 Connect to NSC via ThinLinc

The ThinLinc client can be downloaded for free from http://www.cendio.com/downloads/clients/. It is available for Windows, Mac OS X, Linux and Solaris.

To use ThinLinc to connect to Tetralith:

  1. If you haven’t already done so, download the ThinLinc client matching your local computer (i.e. Windows, Linux, Mac OS X or Solaris) and install it.
  2. Start the client.
  3. Change the “Server” setting to “tetralith.nsc.liu.se”.
  4. Change the “Name” setting to your Tetralith username (e.g. x_abcde).
  5. You do not need to change any other settings.
  6. Enter your Tetralith cluster password in the “Password” box.
  7. Press the “Connect” button.
  8. Enter the 6-digit code generated by the two-factor authentication app on your phone.
  9. If you are connecting for the first time, you will see the “The server’s host key is not cached …” dialog. Verify that the fingerprint shown on your screen matches the one listed below! If it does not match, press Abort and then contact NSC Support!

Tetralith SSH server host key fingerprint: 20:19:f4:6b:38:d6:e7:ac:e6:7c:8e:38:0a:7f:34:dc
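
If you want to double-check the host key before connecting, you can also fetch and print the fingerprint from a terminal on your local computer. This step is optional and assumes the OpenSSH tools ssh-keyscan and ssh-keygen are installed locally; one of the printed lines should contain the fingerprint listed above.

ssh-keyscan tetralith.nsc.liu.se 2>/dev/null | ssh-keygen -lf - -E md5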

After a few seconds, a window with a simple desktop session in it will appear. From the Applications menu, start a Terminal Window.

2 Interactive session

Start an interactive session with three compute cores for today's lab:

interactive -A snic2022-22-681 -t 04:00:00 -n 3

Please adjust the requested time depending on which lab you will be working on, as in the example below. After a minute or so your interactive job should start.
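
For example, if your lab is scheduled to take a full day you could request eight hours instead (the time below is only an illustration; use whatever your lab instructions say):

interactive -A snic2022-22-681 -t 08:00:00 -n 3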

Note that your terminal prompt changed from <nsc_username>@tetralith$ to something like <nsc_username>@n424$ (or another node name), which means that you are now running on one of the compute nodes.
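
If you want to verify that your allocation is running, the standard Slurm command below should work in a terminal on Tetralith; it lists your running and pending jobs. Note that it will not work once you are inside the singularity container described in the next section.

squeue -u $USER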

3 UPPMAX singularity container

We will use a singularity container (a virtual computer) that mimics the UPPMAX computing environment. Once you have started the singularity container, your environment will look exactly like it does on UPPMAX, and the software used in this workshop will be available through the module system inside the container. Use this command to start the singularity container:

singularity shell -B /proj/snic2022-22-681/users:/proj/snic2022-22-769/nobackup -B /proj/snic2022-22-681 /proj/snic2022-22-681/ngsintro.sif

Your terminal prompt changed to something like <nsc_username>@offline-uppmax$. This means that you have moved into a “virtual computer” that mimics the UPPMAX environment.
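
The -B options in the command above bind directories on Tetralith into the container, which is why the UPPMAX-style project path works inside it. You can verify the mapping with a quick listing (the exact contents will differ from project to project):

ls /proj/snic2022-22-769/nobackup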

In the singularity container type this to make the module system behave properly:

source /uppmax_init

Everything from this point onwards should be identical to running the exercise on UPPMAX. To close the singularity container later on, just type exit in the terminal, but don’t do that now.
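
As a quick sanity check that the module system works after sourcing /uppmax_init, you can list the available modules (the exact list depends on the container):

module avail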

Note: Since this is not actually running on UPPMAX, none of the queue system commands (squeue, sbatch, jobinfo, etc.) will work.

4 Cluster workspace

While running the UPPMAX singularity container, create a workspace for this exercise. This will be the “cluster workspace” in which you should perform the analyses, as if you were working on UPPMAX.

The name of the workspace depends on what lab you are working on. For example, if you are working on the introduction to Linux lab then call the workspace “linux_tutorial”. Once the workspace is created you should go into it.

mkdir -p /proj/snic2022-22-769/nobackup/<nsc_username>/linux_tutorial
cd /proj/snic2022-22-769/nobackup/<nsc_username>/linux_tutorial
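
If you are unsure whether you ended up in the right place, printing the working directory should show the workspace path:

pwd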

When you have the UPPMAX singularity container up and running you can follow the regular lab instructions, except that you should not connect to UPPMAX or book a node, and you should work in the cluster workspace defined here. Also, IGV is started with a special command, see below.

5 IGV

To start IGV at NSC, type this on the command line:

/proj/snic2022-22-681/igv/igv.sh

You can use the same command both when running the UPPMAX singularity container and from a normal terminal window at NSC.
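
If you want to keep using the same terminal while IGV is running, you can start it in the background instead:

/proj/snic2022-22-681/igv/igv.sh &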

6 Exit the UPPMAX singularity container

When you are done with the lab just type exit to close the singularity container:

offline-uppmax$ exit

All files and folders that you create in /proj/snic2022-22-769/nobackup/<nsc_username>/ while running the singularity container can also be reached from outside of the container, in this folder on Tetralith:

/proj/snic2022-22-681/users/<nsc_username>/
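
For example, if you created the linux_tutorial workspace from section 4, the same files are visible from a normal Tetralith terminal (outside the container) with:

ls /proj/snic2022-22-681/users/<nsc_username>/linux_tutorial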