The ThinLinc client can be downloaded for free from http://www.cendio.com/downloads/clients/. It is available for Windows, Mac OS X, Linux and Solaris.
To use ThinLinc to connect to Tetralith:
Tetralith SSH server host key fingerprint: 20:19:f4:6b:38:d6:e7:ac:e6:7c:8e:38:0a:7f:34:dc
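If you want to double-check this fingerprint yourself, you can also make a plain SSH connection from a local terminal and compare what is reported. This is just an optional sanity check and assumes a reasonably recent OpenSSH client and the standard Tetralith login address; the FingerprintHash option is only needed because the fingerprint above is in the older MD5 format:
ssh -o FingerprintHash=md5 <nsc_username>@tetralith.nsc.liu.se
On the first connection, compare the fingerprint shown with the one above before answering “yes”.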
After a few seconds, a window with a simple desktop session in it will appear. From the Applications menu, start a Terminal Window.
Start an interactive session with three compute cores for today’s lab:
interactive -A snic2022-22-681 -t 04:00:00 -n 3
Please adjust the requested time depending on which lab you will be working on. After a minute or so your interactive job should start.
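For example, a hypothetical longer booking for a full-day lab could look like this (the time value is only an illustration, use whatever the lab instructions ask for):
interactive -A snic2022-22-681 -t 08:00:00 -n 3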
Note that your terminal prompt changed from <nsc_username>@tetralith$ to something like <nsc_username>@n424$ (or another node name), which means that you are now running on one of the compute nodes.
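If you are unsure which machine you are on, the standard hostname command prints the name of the current node, for example n424 on a compute node:
hostname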
We will use a singularity container (a virtual computer) that mimics the UPPMAX computing environment. Once you have started the singularity container your environment will look exactly as on UPPMAX, and the software used in this workshop will be available through the module system inside the container. Use this command to start the singularity container:
singularity shell -B /proj/snic2022-22-681/users:/proj/snic2022-22-769/nobackup -B /proj/snic2022-22-681 /proj/snic2022-22-681/ngsintro.sif
Your terminal prompt changed to something like <nsc_username>@offline-uppmax$. This means that you have moved into a “virtual computer” that mimics the UPPMAX environment.
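As a side note, the -B options in the singularity command above are ordinary Singularity bind mounts on the form -B <path on Tetralith>:<path inside the container>, which is why the project storage appears under an UPPMAX-style path inside the container. A minimal sketch of the general form, with placeholder paths that are not meant to be run as-is:
singularity shell -B /path/on/host:/path/inside/container image.sif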
In the singularity container type this to make the module system behave properly:
source /uppmax_init
Everything from this point and onwards should be identical to running the exercise on UPPMAX.
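Inside the container the module system should now work as it does on UPPMAX. For example, to list available modules and load the usual UPPMAX base module for bioinformatics software (treat the module name as an illustration and load whatever your lab asks for):
module avail
module load bioinfo-tools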
To close the singularity container later on, just type exit in the terminal, but don’t do that now.
Note: Since this is not actually running on UPPMAX, none of the queue system commands (squeue, sbatch, jobinfo, etc.) will work.
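They do, however, still work in a normal terminal on the Tetralith login node, outside the container. So if you want to check on your interactive job, you can do that from a second terminal there, for example with the standard Slurm listing command:
squeue -u <nsc_username>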
While running the UPPMAX singularity container, create a workspace for this exercise. This will be the “cluster workspace” in which you should perform the analyses, as if you were working on UPPMAX.
The name of the workspace depends on what lab you are working on. For example, if you are working on the introduction to Linux lab then call the workspace “linux_tutorial”. Once the workspace is created you should go into it.
mkdir /proj/snic2022-22-769/nobackup/<nsc_username>/linux_tutorial
cd /proj/snic2022-22-769/nobackup/<nsc_username>/linux_tutorial
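If you are working on a different lab, just replace the last part of the path with that lab’s name. As a sketch, with <lab_name> as a placeholder for the name given in your lab instructions:
mkdir /proj/snic2022-22-769/nobackup/<nsc_username>/<lab_name>
cd /proj/snic2022-22-769/nobackup/<nsc_username>/<lab_name>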
When you have the UPPMAX singularity container up and running you can follow the regular lab instructions, except that you should not connect to UPPMAX or book a node, and you should work in the cluster workspace defined here. Also, IGV is started with a special command; see below.
To start IGV at NSC, type this on the command line:
/proj/snic2022-22-681/igv/igv.sh
You can use the same command both when running the UPPMAX singularity container and from a normal terminal window at NSC.
When you are done with the lab just type exit to close the singularity container:
offline-uppmax$ exit
All files and folders that you create in /proj/snic2022-22-769/nobackup/<nsc_username>/ while running the singularity container can also be reached from outside the container, in this folder on Tetralith:
/proj/snic2022-22-681/users/<nsc_username>/
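For example, from a normal terminal on Tetralith (outside the container) you should be able to list the files you created in the Linux tutorial workspace above like this:
ls /proj/snic2022-22-681/users/<nsc_username>/linux_tutorial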