Showing posts with label Xen Notes. Show all posts

10 May, 2010

Example Xen HVM DomU

#
# Windows 2003 STD - System Center Configuration Manager
#

import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

kernel = "/usr/lib/xen-3.2-1/boot/hvmloader"

builder='hvm'

memory = 1024
shadow_memory = 8
name = "sccm"
vif = [ 'type=ioemu, bridge=eth0, model=rtl8139' ]
# dhcp = 'dhcp'
disk =  [
        'phy:/dev/datastore/sccm-system,hda,w'
        ,'phy:/dev/datastore/sccm-data,hdb,w'
        #, 'file:/media/iso/sccm_2007_sp2.iso,hdc:cdrom,r'
        ]

device_model = '/usr/' + arch_libdir + '/xen-3.2-1/bin/qemu-dm'
acpi=1
apic=1
localtime=1

# boot on floppy (a), hard disk (c) or CD-ROM (d)
# default: hard disk, cd-rom, floppy
boot="dc"

sdl=0
vnc=1
vncconsole=0
vncpasswd=''
vncunused=1
vncdisplay=3
stdvga=0
serial='pty'
usbdevice='tablet'
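assuming the config above is saved as /etc/xen/sccm.cfg (the filename is an assumption), the guest can be created with xm, and its console reached over VNC as configured:

```shell
# create the HVM guest from the config above
xm create /etc/xen/sccm.cfg

# vncdisplay=3 maps to VNC port 5903 on the dom0
vncviewer localhost:3
```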

28 December, 2009

xenBackup - the source

here's the source for the backup script. please feel free to use, modify, plagiarize, mock, torture or hack up any of this bash code as your mood takes you.
#!/bin/bash
#
#   Copyright John Quinn, 2008
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.

#
# xenBackup - Backup Xen Domains
#
#             Version:    1.0:     Created:  John D Quinn, http://www.johnandcailin.com/john
#

# initialize our variables
domains="null"                           # the list of domains to backup
allDomains="null"                        # backup all domains?
targetLocation="/tmp"                    # the default backup target directory
mountPoint="/mnt/xen"                    # the mount point to use to mount disk areas
xenDiskArea="/dev/skx-vg"                # the LVM volume group to use
shutdownDomains=false                    # don't shutdown domains by default
quiet=false                              # keep the chatter down
backupEngine=tar                         # the default backup engine
rsyncExe=/usr/bin/rsync                  # rsync executable
rdiffbackupExe=/usr/bin/rdiff-backup     # rdiff-backup executable
tarExe=/usr/bin/tar                      # tar executable
xmExe=/usr/sbin/xm                       # xm executable
purgeAge="null"                          # age at which to purge increments
globalBackupResult=0                     # success status of overall job

# settings for logging (syslog)
loggerArgs=""                            # what extra arguments to the logger to use
loggerTag="xenBackup"                    # the tag for our log statements
loggerFacility="local3"                  # the syslog facility to log to

# trap user exit and cleanup
trap 'cleanup;exit 1' 1 2

cleanup()
{
   ${logDebug} "Cleaning up"
   cd / ; umount ${mountPoint}

   # restart the domain
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "Restarting domain"
      ${xmExe} create ${domain}.cfg > /dev/null
   fi
}

# function to print a usage message and bail
usageAndBail()
{
   cat << EOT
Usage: xenBackup [OPTION]...
Backup xen domains to a target area. different backup engines may be specified to
produce a tarfile, an exact mirror of the disk area or a mirror with incremental backup.

   -d      backup only the specified DOMAINs (comma separated list)
   -t      target LOCATION for the backup e.g. /tmp or root@www.example.com:/tmp
           (not used for tar engine)
   -a      backup all domains
   -s      shutdown domains before backup (and restart them afterwards)
   -q      run in quiet mode, output still goes to syslog
   -e      backup ENGINE to use, either tar, rsync or rdiff-backup
   -p      purge increments older than TIME_SPEC. this option only applies
           to rdiff-backup, e.g. 3W for 3 weeks. see "man rdiff-backup" for
           more information

Example 1
   Backup all domains to the /tmp directory
   $ xenBackup -a -t /tmp

Example 2
   Backup domain: "wiki" using rsync to directory /var/xenImages on machine backupServer,
   $ xenBackup -e rsync -d wiki -t root@backupServer:/var/xenImages

Example 3
   Backup domains "domainOne" and "domainTwo" using rdiff-backup, purging increments older than 5 days
   $ xenBackup -e rdiff-backup -d "domainOne, domainTwo" -p 5D

EOT

   exit 1;
}

# parse the command line arguments
while getopts p:e:qsad:t:h o
do     case "$o" in
        q)     quiet="true";;
        s)     shutdownDomains="true";;
        a)     allDomains="true";;
        d)     domains="$OPTARG";;
        t)     targetLocation="$OPTARG";;
        e)     backupEngine="$OPTARG";;
        p)     purgeAge="$OPTARG";;
        h)     usageAndBail;;
        [?])   usageAndBail
       esac
done

# if quiet don't output logging to standard error
if test ${quiet} = "false"
then
   loggerArgs="-s"
fi

# setup logging subsystem. using syslog via logger
logCritical="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.crit"
logWarning="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.warning"
logDebug="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.debug"

# make sure only root can run our script
test $(id -u) = 0 || { ${logCritical} "This script must be run as root"; exit 1; }

# make sure that the guest manager is available
test -x ${xmExe} || { ${logCritical} "xen guest manager (${xmExe}) not found"; exit 1; }

# assemble the list of domains to backup
if test ${allDomains} = "true"
then
   domainList=`${xmExe} list | cut -f1 -d" " | egrep -v "Name|Domain-0"`
else
   # make sure we've got some domains specified
   if test "${domains}" = "null"
   then
      usageAndBail
   fi

   # create the domain list by mapping commas to spaces
   domainList=`echo ${domains} | tr -d " " | tr , " "`
fi

# function to do a "rdiff-backup" of domain
backupDomainUsingrdiff-backup ()
{
   domain=$1
   test -x ${rdiffbackupExe} || { ${logCritical} "rdiff-backup executable (${rdiffbackupExe}) not found"; exit 1; }

   if test ${quiet} = "false"
   then
      verbosity="3"
   else
      verbosity="0"
   fi

   targetSubDir=${targetLocation}/${domain}.rdiff-backup.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rdiff-backup"

   # rdiff-backup to the target directory
   ${rdiffbackupExe} --verbosity ${verbosity} ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   # purge old increments
   if test ${purgeAge} != "null"
   then
      # purge old increments
      ${logDebug} "purging increments older than ${purgeAge} from ${targetSubDir}"
      ${rdiffbackupExe} --verbosity ${verbosity} --force --remove-older-than ${purgeAge} ${targetSubDir}
   fi

   return ${backupResult}
}

# function to do a "rsync" backup of domain
backupDomainUsingrsync ()
{
   domain=$1
   test -x ${rsyncExe} || { ${logCritical} "rsync executable (${rsyncExe}) not found"; exit 1; }

   targetSubDir=${targetLocation}/${domain}.rsync.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rsync"

   # rsync to the target directory
   ${rsyncExe} -essh -avz --delete ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   return ${backupResult}
}

# function to a "tar" backup of domain
backupDomainUsingtar ()
{
   domain=$1

   # make sure we can write to the target directory
   test -w ${targetLocation} || { ${logCritical} "target directory (${targetLocation}) is not writeable"; exit 1; }

   targetFile=${targetLocation}/${domain}.`date '+%d%b%y'`.$$.tar.gz
   ${logDebug} "backing up domain ${domain} to ${targetFile} using tar"

   # tar to the target directory
   cd ${mountPoint}

   ${tarExe} pcfz ${targetFile} * > /dev/null
   backupResult=$?

   return ${backupResult}
}

# backup the specified domains
for domain in ${domainList}
do
   ${logDebug} "backing up domain: ${domain}"

   # make sure that the domain is shutdown if required
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "shutting down domain ${domain}"
      ${xmExe} shutdown -w ${domain} > /dev/null
   fi

   # unmount mount point if already mounted
   umount ${mountPoint} > /dev/null 2>&1

   # mount the xen disk read-only
   xenDisk=${xenDiskArea}/${domain}-disk
   test -r ${xenDisk} || { ${logCritical} "xen disk area not readable. are you sure that the domain \"${domain}\" exists?"; exit 1; }
   ${logDebug} "Mounting ${xenDisk} read-only"
   mount -r ${xenDisk} ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }

   # do the backup according to the chosen backup engine
   backupDomainUsing${backupEngine} ${domain}

   # make sure that the backup was successful
   if test $? -ne 0
   then
      ${logCritical} "FAILURE: error backing up domain ${domain}"
      globalBackupResult=1
   else
      ${logDebug} "SUCCESS: domain ${domain} backed up"
   fi
     
   # clean up
   cleanup;
done

if test ${globalBackupResult} -eq 0
then
   ${logDebug} "SUCCESS: backup of all domains completed successfully"
else
   ${logCritical} "FAILURE: backup completed with some failures"
fi

exit ${globalBackupResult}
 

backing up your xen domains

backups are boring, but we all know how important they are. backups can also be quite powerful when working with xen virtualization, since xen allows for convenient backup and restore of entire systems.
i've recently been working on a flexible, general-purpose script enabling incremental backups of complete xen guests, optimized for secure, distributed environments: xenBackup. if you're working with xen, you might find it useful.
the xenBackup script leverages open-source components like ssh, rsync, and rdiff-backup to create a simple, efficient and functional solution.
all code and configurations have been tested on debian etch but should be useful for other *nix flavors with subtle modifications. if you're unfamiliar with xen, you might consider starting with an earlier how-to on setting up xen on your debian etch box.

a general approach

the approach you take to backups obviously depends on what your guests are doing. let's consider one of the more difficult cases: backing up a xen guest with an application server and a database running on it. ideally, you'd:
  • take regular backups of the database.
  • take a regular incremental backup of the entire machine.
  • write the backups onto a different server on your network, and hopefully to a different geographical location too.
  • do all of this without any interruption of your service.
in this article we'll discuss a simple way of doing this.

the xenBackup script

the xenBackup script, which i've included at the end of this article, helps implement a xen backup strategy. it automates the backup of single or multiple xen guests using one of three backup methods, tar, rsync or rdiff-backup. the usage message for xenBackup is as follows:
Usage: xenBackup [OPTION]...
Backup xen domains to a target area. different backup engines may be specified to
produce a tarfile, an exact mirror of the disk area or a mirror with incremental backup.

   -d      backup only the specified DOMAIN
   -t      target LOCATION for the backup e.g. /tmp or root@www.example.com:/tmp
           (not used for tar engine)
   -a      backup all domains
   -s      shutdown domains before backup (and restart them afterwards)
   -q      run in quiet mode
   -e      backup ENGINE to use, either tar, rsync or rdiff-backup
   -p      purge increments older than TIME_SPEC. this option only applies
           to rdiff-backup, e.g. 3W for 3 weeks. see "man rdiff-backup" for
           more information

to illustrate how it could be used, let's consider a typical scenario.

scenario: a single xen server with multiple xen guests

consider a xen server with multiple guests running on it, where the database on each guest backs up locally using a database specific backup technique e.g. a regularly scheduled hot backup writing to the local file system.
xenBackup could be used to periodically backup each of the local guests from the dom0. this is safe to do on a running server since the database backup does not rely on datafile consistency, but instead on the hot backups.
alternatively, the hot backup could be avoided if each of the guests was cleanly shut down before the backup. xenBackup supports both modes of operation, but the former is recommended.
the xenBackup command on dom0 to incrementally backup all guests to /var/backup would simply be:
$ sudo xenBackup -a -e rdiff-backup -t /var/backup
this arrangement is shown in the diagram above. additionally, this backup could be periodically pulled from another backup server using rsync over ssh. this backup server could be located on or off-site.
this could be simplified by writing the backup directly onto another server in a single xenBackup command. this is arguably less secure since you need to push the backup rather than pull it, but could be done with:
$ sudo xenBackup -a -e rdiff-backup -t root@backupserver:/var/backup
by running xenBackup on multiple dom0's, the following arrangement can easily be achieved:


the backups

one of the great things about xen is that the backup allows you to reconstitute a fully working xen guest from the backup area, simply with a command like:
$ sudo xen-create-image --copy=/var/backup/mymachine.rdiff-backup.mirror --ip=192.168.1.10 --hostname=mymachine
if rdiff-backup is used as the backup engine, the xen guest can easily be restored to a historical state, to as far back as increments are kept (controlled by the xenBackup purge flag, -p). see man rdiff-backup for more information on using --restore-as-of on your backup directory.
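for example, restoring the filesystem of a guest as it was three days ago might look like this (the paths follow the naming xenBackup uses; the restore directory is an assumption):

```shell
# restore the rdiff-backup mirror of "mymachine" as of 3 days ago
rdiff-backup --restore-as-of 3D \
   /var/backup/mymachine.rdiff-backup.mirror /tmp/mymachine-restore
```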

dependencies

the xenBackup script has dependencies on rsync and rdiff-backup, if you choose to use those engines. if you do, you should install those packages:
$ sudo apt-get install rsync rdiff-backup

automation

to automate your backups, consider adding a cron entry to automatically run xenBackup on your backup server. typically you should do this at a quiet time. an example cron entry is:
00 1 * * * /usr/bin/xenBackup -q -a -t /var/backup -e rdiff-backup
consider adding the backup server's identity (public key) to the authorized keys of the backup user on the machines it exchanges backups with, allowing encrypted backups to be automated over ssh. read up on ssh-copy-id and ssh-keygen for more information. give very careful consideration to security when determining which user to use and which machines can access what.
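a minimal sketch of that key setup, run on the machine that initiates the transfer (the hostname is illustrative):

```shell
# generate a key pair for unattended transfers (consider a passphrase
# plus ssh-agent if your security policy requires one)
ssh-keygen -t rsa

# install the public key in the remote backup user's authorized keys
ssh-copy-id root@backupserver
```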

a word on syslog

xenBackup logs all backup output to syslog's local3 facility. all xenBackup's log output is available in the main syslog log, /var/log/syslog. in addition, a dedicated xenBackup log can be created by adding the following to your /etc/syslog.conf file:
# xenBackup logging: log all local3's messages to /var/log/xenBackup
local3.*                        /var/log/xenBackup
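after adding that line, restart the syslog daemon so it takes effect (on debian etch the daemon is sysklogd), and send a test message to confirm the routing:

```shell
# restart syslog after editing /etc/syslog.conf
/etc/init.d/sysklogd restart

# log a test message to the local3 facility and check it arrived
logger -t xenBackup -p local3.debug "syslog test"
tail -1 /var/log/xenBackup
```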
if you'd like to be informed about critical backup problems by email, please refer to my earlier blog how to setup real-time email-notification for critical syslog events.

setting up xen on your debian etch box

xen is a free software virtual machine monitor for IA-32, x86-64, IA-64 and PowerPC architectures. it runs on a host operating system and allows several guest operating systems to be run on top of the host on the same computer hardware at the same time.
there are many ways to setup xen, but i've put together a simple step-by-step guide to get a working xen system based on debian etch. easy as pie.

install your host system

install a copy of debian etch. you should leave a partition available for lvm, that your virtual machines will use for disk.

create a logical volume group

  1. Get the linux logical volume manager; apt-get install lvm2
  2. Initialize your partition (or disk) for lvm; pvcreate /dev/myLvmPartition
  3. Create a logical volume group on your partition; vgcreate skx-vg /dev/myLvmPartition
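xen-tools will create the per-guest logical volumes for you later, but for reference, the disk and swap volumes for a guest look something like this (sizes are illustrative):

```shell
# a 4G root disk and a 128M swap volume in the skx-vg volume group
lvcreate -L 4G -n mymachine-disk skx-vg
lvcreate -L 128M -n mymachine-swap skx-vg

# list the volumes in the group
lvs skx-vg
```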

install xen

you can install Xen from the debian packages. Find a list with apt-cache search xen-linux-system. you'll do something like:
# apt-get install xen-tools xen-linux-system-2.6.18-4-xen-686 xen-docs-3.0 libc6-xen
you should end up with something like the following, depending on what you chose:
# dpkg --list | grep xen
ii  libc6-xen                         2.3.6.ds1-13etch2
ii  linux-image-2.6.18-4-xen-686      2.6.18.dfsg.1-12etch2
ii  linux-modules-2.6.18-4-xen-686    2.6.18.dfsg.1-12etch2
ii  xen-docs-3.0                      3.0.3-0-2
ii  xen-hypervisor-3.0.3-1-i386-pae   3.0.3-0-2
ii  xen-linux-system-2.6.18-4-xen-686 2.6.18.dfsg.1-12etch2
ii  xen-tools                         2.8-2
ii  xen-utils-3.0.3-1                 3.0.3-0-2
ii  xen-utils-common                  3.0.3-0-2

reboot

reboot your system and make sure that you're now running the xen kernel
# uname -a
Linux yourhostmachine 2.6.18-4-xen-686 #1 SMP Thu May 10 03:24:35 UTC 2007 i686 GNU/Linux

configure a network bridge

get the bridge utils package
# apt-get install bridge-utils
add a bridging interface to /etc/network/interfaces
auto xenbr0
iface xenbr0 inet static
   pre-up brctl addbr xenbr0
   post-down brctl delbr xenbr0
   post-up iptables -t nat -F
   post-up iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
   address 192.168.1.1
   netmask 255.255.255.0
   bridge_fd 0
   bridge_hello 0
   bridge_stp off
bring up this new interface:
# ifup xenbr0
edit /etc/sysctl.conf and uncomment the following line:
net.ipv4.conf.default.forwarding=1
enable this by:
# sysctl -p
#  echo 1 > /proc/sys/net/ipv4/conf/all/forwarding

configure your default guest system using xen-tools

you can use xen-tools to configure a default guest system. It's here where you specify what OS you want to use, how networking is configured, how disk is configured etc. This can be overridden when you create a specific guest system, but it's a good idea to configure your starting point.
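the defaults live in /etc/xen-tools/xen-tools.conf; a minimal sketch of the relevant settings (values are illustrative, matching the volume group and bridge created earlier):

```shell
# /etc/xen-tools/xen-tools.conf (excerpt)
lvm = skx-vg              # put guest disks in our volume group
size = 4Gb                # root disk size
memory = 128Mb            # guest memory
swap = 128Mb              # swap size
dist = etch               # distribution to install
debootstrap = 1           # install via debootstrap
gateway = 192.168.1.1     # our bridge address
netmask = 255.255.255.0
```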

try creating a guest system

you can create a guest system as follows:
# xen-create-image --ip=192.168.1.6 --hostname=mymachine
this takes a minute or two. you can follow along with the progress by tailing the log file:
# tail -f /var/log/xen-tools/mymachine.log
you can later delete this image using:
# xen-delete-image mymachine
you can list all your images using:
# xen-list-images

boot up that sucker

you can quickly test-boot your new system as follows.
# xm create -c mymachine.cfg
this attaches a console to it and is useful for making sure that it works o.k. when you've got everything working you'll probably want to use a start / stop technique described later.

port forward (optional)

if you want external machines to access ports on your virtual machines you can setup port forwards using IP tables e.g. if you wanted to install apache on one of your virtual machines and have it answer on http://yourhostmachine:80, you'd do the following (which forwards HTTP traffic on your eth0 interface to a virtual machine at address 192.168.1.8). add the following two lines to your network/interfaces file:
   post-up iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j DNAT --to 192.168.1.8:80
   post-up iptables -A INPUT -p tcp -m state --state NEW --dport 80 -i eth0 -j ACCEPT
i.e. your complete bridge definition might look like:
auto xenbr0
iface xenbr0 inet static
   pre-up brctl addbr xenbr0
   post-down brctl delbr xenbr0
   post-up iptables -t nat -F
   post-up iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
   post-up iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j DNAT --to 192.168.1.8:80
   post-up iptables -A INPUT -p tcp -m state --state NEW --dport 80 -i eth0 -j ACCEPT
   address 192.168.1.1
   netmask 255.255.255.0
   bridge_fd 0
   bridge_hello 0
   bridge_stp off

cloning a machine

one of the great things about Xen is that it makes it really simple to build a machine exactly the way that you want it, then clone it and distribute it to everyone that needs it, allowing you to:
  • Easily create development sandboxes
  • Create and distribute a standardized development environment
  • Create a machine and then build a cluster
  • Upgrade machines by duplicating them, patching the duplicates and if everything goes well, switching over to the new machines or rolling back.
anyway, here's an easy way that you can do it.

create a tarfile of an existing virtual machine

  1. create a place to store your image # mkdir /var/xen-images
  2. shutdown the machine that you're planning to clone (duh)
  3. create a mount point to mount one of your existing images # mkdir /mnt/xen
  4. mount the image you want to copy #  mount /dev/skx-vg/mymachine-disk /mnt/xen
  5. go to the mount point and tar everything up # cd /mnt/xen ; tar pcfzv /var/xen-images/myImage.tar.gz *
  6. take a peek at your nice new tar file # tar tvfz /var/xen-images/myImage.tar.gz
  7. get out of the mount point and unmount. # cd / ; umount /mnt/xen
i've created a bash script to automate this, posted at the end of this article
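the steps above can be sketched as a short script (the machine name and paths are assumptions, matching the steps):

```shell
#!/bin/bash
# tar up a (shut down) xen guest image
machine=mymachine
imageDir=/var/xen-images
mountPoint=/mnt/xen

mkdir -p ${imageDir} ${mountPoint}
mount /dev/skx-vg/${machine}-disk ${mountPoint}

# preserve permissions (p) and compress (z)
cd ${mountPoint} && tar pcfz ${imageDir}/${machine}.tar.gz *
cd / && umount ${mountPoint}
```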

creating a virtual machine from a tarfile (like the one created above)

  1. temporarily comment out any installation method in /etc/xen-tools/xen-tools.conf e.g. this line debootstrap = 1
  2. create your image with whatever flags you want e.g. # xen-create-image --tar=/var/xen-images/myImage.tar.gz --ip=192.168.1.10 --hostname=flossyTheClonedMachine
  3. off you go to happy land.

starting and stopping on boot

If you want to automatically start / stop your machines on bootup, link the machine configuration in /etc/xen/auto e.g.
# mkdir /etc/xen/auto
# ln -s /etc/xen/mymachine.cfg /etc/xen/auto/

manually starting and stopping

You can easily start and stop all your xen domains with the handy /etc/init.d/xendomains script e.g. by:
# /etc/init.d/xendomains stop
You can use the usual stop, start, restart commands

utilities

take a look at XenMan (apt-get install xenman), a nifty little x-windows tool for managing the virtual machines running on your host.

cleaning up the debian install

if you install a debian guest, you should consider some post install steps including:
  • setup locales:
    # apt-get install locales
    # dpkg-reconfigure locales
    picking e.g. en_US.UTF-8 UTF-8
  • set the timezone:
    # tzconfig
    (note: say yes and follow the prompts even if it looks right)
  • by default your domU clock is the dom0 clock. this is probably the way you should leave it i.e. install ntp on dom0 and have your domU's use the dom0 synchronized clock. if you want your domU to operate independently, you'll want to try: echo 1 > /proc/sys/xen/independent_wallclock

notes

If you are seeing errors like "4Gb seg fixup" spewed to the console, you need to apt-get install libc6-xen

backing up your xen guests

if you need to backup your xen guests, please take a look at my article backing up your xen domains for a discussion on the subject. a flexible script that you can use, xenBackup, is also provided.

setting up a bridging interface

in the configuration above the xen guests are only visible to the xen host, and any services on the guests must be accessed via port forwarding, tunneling etc. for some applications, a bridging configuration works better. you can set this up by following the instructions in setting up a xen bridging interface.

Ntpdate has no effect

Xen by default just uses the dom0's clock, which isn't updated within the domU's. Either set /proc/sys/xen/independent_wallclock to 1 (so that this domU has an independent clock from the host dom0), or set the clock in the dom0.

Making a tape drive available to a guest via iSCSI


This is specifically for Citrix XenServer, although the principles will of course work in other Xen implementations

I recently had a scenario where I was replacing two Windows servers with XenServer guests. This was fine, but we needed a way to backup to the existing SCSI DDS4 DAT drive. After failing to make PCI passthrough work, I settled on the much nicer method of providing the tape drive via an iSCSI target on the XenServer Host (Dom0). Here is how I achieved this.
Note 1: This is totally unsupported by Citrix
Note 2: I've used the XenServer terminology "host" instead of Dom0, as this applies to the Citrix commercial implementation of Xen. It will probably work fine on OSS Xen, but you can just install the normal kernel dev packages and ignore the DDK stuff.
Note 3: This is for XenServer 4.1.0, but the principles are the same for previous versions. Just ensure you understand each step rather than following blindly.
Note 4: You'll need to enable yum repositories. Do this by editing /etc/yum.repos.d/CentOS-Base.repo, and set "enabled=1" for the Base, Updates and Addons repositories
Note 5: Thanks to the wonderful work of Blake-r the rawio patch has now been updated to work against iscsitarget-0.4.17 - when using this against a kernel newer than 2.6.22, you'll need to edit kernel/target_raw.c and replace all the psg.page occurrences with psg.page_link due to changes in the scatterlist struct. To take advantage of this, just substitute the newer versions of iscsitarget and the rawio patch in these instructions. You should be able to keep all the instructions the same, but I've not tested this yet.

  • on a build machine matching the host kernel (e.g. the XenServer DDK, per Note 2), with the iscsitarget-0.4.14 tarball and the rawio patch (raw.p) downloaded, build the target:
yum install kernel-devel bison flex
tar -zxvf iscsitarget-0.4.14.tar.gz
cd iscsitarget-0.4.14
patch -p0 < /tmp/raw.p
make
  • scp the entire iscsitarget-0.4.14 directory to your destination Xen host, and on that host (after enabling the base repo in /etc/yum.repos.d/CentOS-Base.repo) do:
yum install make gcc
cd iscsitarget-0.4.14
make install
mkdir /lib/modules/`uname -r`/kernel/iscsi
cp kernel/iscsi_trgt.ko /lib/modules/`uname -r`/kernel/iscsi
depmod -aq
The last three steps are required because make install will not copy the kernel module correctly outside the target environment.

  • Now edit your /etc/ietd.conf and configure the tape as per the following example snippet (cat /proc/scsi/scsi for the correct HCIL values for your SCSI tape drive, this is an example only):
Target iqn.2007-04.com.example:tape0
     Lun 0 H=1,C=0,I=6,L=0,Type=rawio
     Type 1
  • Save and do /etc/init.d/iscsi-target start
  • Modify /etc/sysconfig/iptables to allow port 3260 tcp from the IP addresses running the initiator.
  • Attach to the target using the initiator of your choice.
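On a Linux guest running open-iscsi, attaching to the target might look like this (the portal address is an assumption):

```shell
# discover targets offered by the host, then log in to the tape target
iscsiadm -m discovery -t sendtargets -p 192.168.1.1
iscsiadm -m node -T iqn.2007-04.com.example:tape0 -p 192.168.1.1 --login

# the tape should now appear as a SCSI device, e.g. /dev/st0
dmesg | tail
```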

Working around pesky controllers which change HCIL at boot

Simon, if you read this, your captcha on your blog is broken, preventing comments, and there's no contact email address. Hope you find this, and find it useful. ;)
Just bung it in your rc.local.
#!/bin/bash
#
# This script is a (very) primitive method of determining the current HCIL info
# for /etc/ietd.conf (config file for iscsi-target) to work around some
# controllers which change this ID at boot time. You'll probably want to add
# some data validation of those vars as this is purely for my environments.
# Feel free to adapt or use it in any way you see fit.
# Greig McGill. August, 2009

# First set some vars

ietd="/etc/ietd.conf"
iqn="iqn.2009-08.nz.org.aol.internal:tape0"

# Get the whole HCIL string representing the tape drive.
# IMPORTANT NOTE: I am assuming there is only one sequential access device.
# If I am wrong, you WILL need to rewrite this, or badness will happen.
# You have been warned.

HCIL="`cat /proc/scsi/scsi | grep -B2 Sequential | head -1`"

# Now get each component

H="`echo $HCIL | cut -c 11`"
C="`echo $HCIL | cut -c 23`"
I="`echo $HCIL | cut -c 30`"
L="`echo $HCIL | cut -c 38`"

# Got all that, now generate ietd.conf and restart iscsi-target

cat <<EOT >$ietd
Target $iqn
        Lun 0 H=$H,C=$C,I=$I,L=$L,Type=rawio
        Type 1
EOT

/etc/init.d/iscsi-target restart

# exit with no error

exit 0
See iSCSINotes as well for more information.