Jan 19, 2014

Cloning FreeBSD Jails with ZFS as a method for provisioning Oracle 11g R2

An increasingly pressing requirement in today's data centers is the ability to deploy applications and services quickly, almost immediately.

It is increasingly common for system administrators and DBAs to maintain several versions of the same service or application, to be used for production, testing, development, or reporting.

In this post I will show how the combination of ZFS and FreeBSD Jails can simplify these tasks to the point that they become almost trivial.


clonejailz.sh

I have written this script to automate the process of cloning a FreeBSD Jail on ZFS.

The requirements to be met by a Jail to be cloned with this script are the following:

  • The Jail must have all its file systems defined on a single ZFS pool. 
  • The Jail configuration must reside in a jail.conf file, and the Jail's file systems must be defined in a separate fstab file.
  • The path of the Jail must take the form $JAIL_ROOT/$JAIL_NAME, where the JAIL_ROOT variable defines the root directory of the Jails.
  • The dataset where the Jail resides takes the form ZPOOL/$JAIL_ROOT/$JAIL_NAME and is the root of all other file systems associated with the Jail.
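
As a sketch of what such a setup looks like (using the names from the example later in this post; the exact jail.conf options shown here are illustrative, not prescriptive), a Base Jail called debora could be declared as:

```
# Hypothetical excerpt of /jailz/etc/jail.conf
debora {
    path = /jailz/debora;
    mount.fstab = "/jailz/etc/fstab.debora";
    # The address must be a literal IPv4 quad; the cloning script
    # rewrites it with sed when generating the new Jail's entry.
    ip4.addr = 127.0.0.25;
}
```

The matching dataset would then be fbsdzpool1/jailz/debora, mounted on /jailz/debora, with any other file systems of the Jail created beneath it.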

The clonejailz.sh script is installed by copying the file to the path indicated by the JAIL_BIN variable, which is defined in the clonejailz.rc file.

The clonejailz.rc file can live in three different locations. They are sourced in the order listed below, so a variable set in a later file overrides the value from an earlier one:

  • In the directory /usr/local/etc
  • In the same directory where the script clonejailz.sh is located.
  • In the user's HOME directory, taking the form $HOME/.clonejailz.rc
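
The precedence follows simply from the order in which the files are sourced: a variable set in a later file overrides the value from an earlier one. A self-contained sketch of the mechanism, using temporary files in place of the real locations:

```shell
#!/bin/sh
# "Last sourced wins": the same variable defined in two rc files
# keeps the value from the file sourced last.
tmpdir=$( mktemp -d ) || exit 1

echo 'JAIL_ROOT=/jailz' > ${tmpdir}/system.rc   # stands in for /usr/local/etc/clonejailz.rc
echo 'JAIL_ROOT=/home'  > ${tmpdir}/user.rc     # stands in for $HOME/.clonejailz.rc

# Source in the same order clonejailz.sh does.
[ -s ${tmpdir}/system.rc ] && . ${tmpdir}/system.rc
[ -s ${tmpdir}/user.rc ]   && . ${tmpdir}/user.rc

echo "JAIL_ROOT is now ${JAIL_ROOT}"
rm -r ${tmpdir}
```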

The clonejailz.sh script is invoked with the following arguments:
# clonejailz.sh bjname=base_jail_name njname=new_jail_name jipadr=new_jail_ip [njpool=new_jail_pool] [script=script_to_run_inside_the_new_jail]

  • bjname : The name of the Base Jail, i.e. the Jail to be cloned.
  • njname : The name of the Jail that will be created as a copy of the Base Jail.
  • jipadr : The IP address assigned to the newly created Jail.
  • njpool : The name of the ZFS pool where the new Jail will reside. If not defined, the Jail is created as a ZFS clone in the same pool as the Base Jail.
  • script : Optionally, a script that performs additional configuration actions for the newly created Jail. This script will be executed within the newly created Jail.
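
The keyword=value arguments are consumed with a plain case statement and the POSIX ${1#prefix} parameter expansion, the same idiom the script itself uses; here it is in isolation (slightly simplified) so the mechanism is easy to see:

```shell
#!/bin/sh
# Strip the "keyword=" prefix with ${1#keyword=} and keep the value.
parse_args () {
  BJ_NAME="" ; NJ_NAME="" ; NJ_IP=""
  while [ $# -gt 0 ]
  do
    case "${1}" in
      bjname=*) BJ_NAME=${1#bjname=} ;;
      njname=*) NJ_NAME=${1#njname=} ;;
      jipadr=*) NJ_IP=${1#jipadr=}   ;;
      *) echo "unknown argument: ${1}" ;;
    esac
    shift
  done
}

parse_args bjname=debora njname=oratest1 jipadr=127.0.0.100
echo "base=${BJ_NAME} new=${NJ_NAME} ip=${NJ_IP}"
```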

This cloning system for Jails consists of the following elements.


File 1: clonejailz.rc


This is the file that defines the main parameters governing the operation of clonejailz.sh.

// Beginning of File clonejailz.rc
#
# http://devil-detail.blogspot.com.es/
#

# The path of the jail must have the format $JAIL_ROOT/$JAIL_NAME,
# hence the JAIL_ROOT variable defines the root directory for the Jails.
# Each Jail must have all filesystems defined on a single ZFS pool.
# The dataset where the Jail resides takes the form ZPOOL/$JAIL_ROOT/$JAIL_NAME
# and is the root of all other file systems associated with the Jail.
#
JAIL_ROOT=/jailz


# JAIL_BIN holds the path where clonejailz.sh and the scripts for the post-cloning actions reside.
#
JAIL_BIN=${JAIL_ROOT}/bin

# The configuration of the jail resides in jail.conf and the
# filesystems owned by the Jail are defined in a fstab file.
# JAIL_ETC defines where the jail.conf and fstab.JAIL_NAME files reside.
#
JAIL_ETC=${JAIL_ROOT}/etc

#
# JAIL_CONF holds the name and complete path of the jail configuration file
#
JAIL_CONF=${JAIL_ETC}/jail.conf

# The snapshots performed by this script have the format $CLONE_PREFIX$JAIL_NAME
#
CLONE_PREFIX=clone_for_

# An alternative set of values for these variables may be the following
# JAIL_ROOT=/home
# JAIL_BIN=/usr/local/sbin
# JAIL_ETC=/usr/local/etc
# JAIL_CONF=${JAIL_ETC}/jail.conf
// End of File clonejailz.rc

 

File 2: clonejailz.sh

 This is the script that performs the cloning of the Jail.

// Beginning of File clonejailz.sh
#!/bin/sh
#
# http://devil-detail.blogspot.com.es/
#

# Find the clonejailz.rc file for defining global settings.

[ -s /usr/local/etc/clonejailz.rc ] && . /usr/local/etc/clonejailz.rc
[ -s "${0%/*}/clonejailz.rc" ] && . ${0%/*}/clonejailz.rc
[ -s "$HOME/.clonejailz.rc" ] && . $HOME/.clonejailz.rc

# Initialize script variables.

BJ_NAME=""
NJ_NAME=""
NJ_IP=""
NJ_ZPOOL=""
JAIL_SCRIPT=""
BJ_DATASET=""
NJ_DATASET=""
NJ_MOUNTP=""
TMP_CFG=""

# Declare Functions.

# Print the parameters definition.

print_help () {
 echo "Usage: clonejailz.sh bjname=base_jail_name njname=new_jail_name
              jipadr=new_jail_ip [njpool=new_jail_pool]
              [script=script_to_run_inside_the_new_jail]
              [help]"
echo ""
echo "bjname : Name of the base Jail. Mandatory."
echo "njname : Name of the new Jail. Mandatory."
echo "jipadr : IP address of the new Jail. Mandatory."
echo "njpool : ZFS pool where the new Jail will reside. Optional."
echo "script : Script to run inside the new Jail. Optional."
echo "help   : Print this message."
return 0
}

# Validate conditions.

check_params () {
result=1

if [ -w "${JAIL_CONF}" ]
then
  check_njname && check_bjname && check_njpool && check_jipadr && check_script
  result=$?
else
  echo "${JAIL_CONF} : Is not writable."
fi
 
return $result
}

#  Check the correctness of Destination Pool for the cloned Jail.

check_njpool () {
result=1
if [ "${NJ_ZPOOL}" = "" ]
then
  result=0
else
  # Exact-match the pool name to avoid substring false positives.
  npool=$( zpool list -H -o name | grep -cx "${NJ_ZPOOL}" )
  if [ ${npool} -eq 0 ]
  then
    echo "${NJ_ZPOOL} : The Pool does not exist."
  else
    result=0
  fi
fi

return $result
}

# Check the Base Jail Name.

check_bjname () {
result=1
if [ "${BJ_NAME}" = "" ]
then
  echo "bjname is mandatory."
else
  cfile=$( grep ${BJ_NAME} ${JAIL_CONF} | wc -l )

  if [ $cfile -eq 0 ]
  then
    echo "${BJ_NAME} : Does not exist in ${JAIL_CONF}"
  else
    isactivejail=$( jls | grep ${BJ_NAME} | wc -l )

    if [ ${isactivejail} -eq 0 ]
    then
      result=0
    else
      echo "${BJ_NAME} : must be inactive."
    fi
  fi
fi

return $result
}

# Check that the New Jail does not already exist in jail.conf and that its mount point does not exist.

check_njname () {
result=1

if [ "${NJ_NAME}" = "" ]
then
  echo "njname is mandatory."
else
  cfile=$( grep ${NJ_NAME} ${JAIL_CONF} | wc -l )

  if [ $cfile -eq 0 ]
  then
    if [ -d ${NJ_MOUNTP} ]
    then
      echo "${NJ_MOUNTP} : Already exists."
    else
      result=0
    fi
  else
    echo "${NJ_NAME} : Already exists in ${JAIL_CONF}"
  fi
fi
return $result
}

# Parse the IP address for the New Jail.

check_jipadr () {
result=1
IPregex="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"

if [ "${NJ_IP}" = "" ]
then
  echo "jipadr is mandatory."
else
  if ( echo $NJ_IP | grep -qs -E $IPregex )
  then
    result=0
  else
    echo "${NJ_IP} : Invalid IPv4 format."
  fi
fi
return $result
}

# Check the execution rights of the supplied script, if one was given.

check_script () {
result=1
if [ "${JAIL_SCRIPT}" = "" ]
then
  result=0
else
  if [ -x "${JAIL_BIN}/${JAIL_SCRIPT}" ]
  then
    result=0
  else
    echo "${JAIL_BIN}/${JAIL_SCRIPT} is not an executable script."
  fi
fi
return $result
}

# Create a new entry in jail.conf for the new Jail and a fstab file in JAIL_ETC.

clone_jail_conf () {
  result=1
  # Identify the fstab file of the Base Jail.
  oldfstab=$( grep fstab ${TMP_CFG} | cut -f2 -d'=' | cut -f1 -d';' |cut -f2 -d '"' )

  if [ -r "${oldfstab}" ]
  then
    # Define the fstab file name for the new Jail.
    newfstab=${JAIL_ETC}/fstab.${NJ_NAME}
    # Create the new fstab file from the original fstab file.
    sed -E 's/'"${BJ_NAME}"'/'"${NJ_NAME}"'/g' < ${oldfstab} > ${newfstab}
  fi

  # Create the new entry in jail.conf for the new Jail.
  sed -E '
         s/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/'"${NJ_IP}"'/g
         s/'"${BJ_NAME}"'/'"${NJ_NAME}"'/g
  ' < ${TMP_CFG}  >> ${JAIL_CONF}

  result=$?
  [ $result -ne 0 ] && echo "An error has occurred while updating jail.conf."
  return $result
}

# Make a ZFS clone of the BASE JAIL, or send it to a new ZFS pool if one was defined.

clone_zfs() {
  result=1
  snapshot_name=${BJ_DATASET}@${CLONE_PREFIX}${NJ_NAME}
 

  # First of all, create a snapshot of the ZFS filesystem where the Base Jail resides.

  zfs snapshot ${snapshot_name}
  result=$?
  if [ ${result} -eq 0 ]
  then
    if [ "${NJ_ZPOOL}" = "" ]
    then
      # If no destination POOL was defined for the new JAIL,
      # then perform a clone in the POOL where the BASE JAIL resides.
      NJ_DATASET=$( echo ${BJ_DATASET} | cut -f1 -d '/' )${NJ_MOUNTP}
      zfs clone ${snapshot_name} ${NJ_DATASET}
      result=$?
    else
      # If a destination POOL was given, make a copy of the BASE JAIL,
      # transfer it to the NEW POOL and finally rename it to the NEW JAIL.
      NJ_DATASET=${NJ_ZPOOL}${NJ_MOUNTP}
      zpool_root_fs=${NJ_ZPOOL}${JAIL_ROOT}

      zfs create ${zpool_root_fs}

      zfs send ${snapshot_name} | zfs receive -euv ${zpool_root_fs}

      zfs rename ${zpool_root_fs}/${BJ_NAME} ${NJ_DATASET}
      zfs set mountpoint=${NJ_MOUNTP} ${NJ_DATASET}
      result=$?
    fi
  else
    echo "${snapshot_name} : cannot be created."
  fi

  [ $result -ne 0 ] && echo "${BJ_DATASET}: cannot be cloned."
  return $result
}

#  Start the New Jail, add its IP address to the hosts file and execute the supplied script.

jail_setup() {
  result=1

  # Start the New Jail.

  jail -f ${JAIL_CONF} -c ${NJ_NAME}

  jid=$( jls -j ${NJ_NAME} -h jid | tail -1 )
  result=$?

  if [ $result -eq 0 ]
  then
    # Add the hostname and the IP to the hosts file.

    echo "${NJ_IP}   ${NJ_NAME}" >> ${NJ_MOUNTP}/etc/hosts

    if [ "${JAIL_SCRIPT}" != "" ]
    then
      # If a script was supplied, it is executed inside the Jail.
      tmp_jail=${NJ_MOUNTP}/tmp

      cp ${JAIL_BIN}/${JAIL_SCRIPT} ${tmp_jail}

      jexec $jid /tmp/${JAIL_SCRIPT}
    fi

    # Stop the Jail.

    jail -f ${JAIL_CONF} -r ${NJ_NAME}
    result=$?
  else
    echo " Invalid JID."
  fi

  if [ $result -eq 0 ]
  then
    echo "${NJ_NAME} was successfully created."
  else
    echo "${NJ_NAME} was created with errors."
  fi
  return $result
}

# The script begins parsing the input parameters.

result=1
if [ $# -gt 0 ]
then
  while [ $# -gt 0 ]
  do
    case "${1}" in
        bjname=*) BJ_NAME=${1#bjname=}  ;;
        njname=*) NJ_NAME=${1#njname=}   ;;
        jipadr=*) NJ_IP=${1#jipadr=}     ;;
        njpool=*) NJ_ZPOOL=${1#njpool=}  ;;
        script=*) JAIL_SCRIPT=${1#script=}     ;;
        help) print_help ; exit 1;;
      *)  break;;
    esac
    shift
  done
else
  print_help
  exit $result
fi

NJ_MOUNTP=${JAIL_ROOT}/${NJ_NAME}

# Next, validate the parameters.

if ( check_params )
then
  # If the script parameters were OK, go ahead.

  TMP_CFG=/tmp/tmp.${NJ_NAME}

  # Make a backup of the jail.conf file.
  now=$( date +"%m%d%Y%H%M%S" )

  cp  -p ${JAIL_CONF}  ${JAIL_CONF}.${now}
  

# Create a temp file with a copy of the Base Jail configuration.

  sed -En '/'"$BJ_NAME"' *{/,/}/ p' < ${JAIL_CONF} > ${TMP_CFG}

  bjpath=$( grep path ${TMP_CFG} | cut -f2 -d'=' | cut  -f1 -d';' )

  BJ_DATASET=$( zfs list -H -o name ${bjpath} )

  result=$?

  if [ ${result} -eq 0 ]
  then
     # Finally proceed with the cloning process.

     clone_zfs && clone_jail_conf && jail_setup
     result=$?
  else
    echo "${BJ_NAME} : Has no associated ZFS filesystem."
  fi
fi
exit $result
// End of File clonejailz.sh
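
A detail worth noting in clone_zfs is how the new Jail's dataset name is derived in each of the two modes. Both derivations are pure string operations, so they can be checked without touching ZFS at all (the values below are the ones from the example run later in this post):

```shell
#!/bin/sh
# Reproduce the dataset-name derivation from clone_zfs without
# running any ZFS commands.
JAIL_ROOT=/jailz
BJ_DATASET=fbsdzpool1/jailz/debora   # dataset of the Base Jail
NJ_NAME=oratest1
NJ_MOUNTP=${JAIL_ROOT}/${NJ_NAME}    # /jailz/oratest1

# Mode 1: no njpool= given -> clone inside the Base Jail's pool.
# Keep the pool component of BJ_DATASET and append the new mount point.
NJ_DATASET=$( echo ${BJ_DATASET} | cut -f1 -d '/' )${NJ_MOUNTP}
echo "clone target:   ${NJ_DATASET}"

# Mode 2: njpool=datapool0 -> send/receive into the new pool.
NJ_ZPOOL=datapool0
NJ_DATASET2=${NJ_ZPOOL}${NJ_MOUNTP}
echo "receive target: ${NJ_DATASET2}"
```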

File 3: cloneorahome.sh

As an optional final step of the cloning process, a script can be run inside the new Jail to perform additional configuration tasks. The name of the script to run is given in the script= parameter; clonejailz.sh starts the new Jail, copies the script to /tmp inside the Jail, and runs it.
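
Such a script is nothing more than an executable file placed in $JAIL_BIN. A minimal, entirely hypothetical skeleton showing the contract (the log path and the echo stand in for real configuration work):

```shell
#!/bin/sh
# Hypothetical minimal post-cloning script. When clonejailz.sh runs it,
# it executes inside the new Jail as root; the echo below stands in
# for real configuration work (users, services, databases...).
LOG=${TMPDIR:-/tmp}/postclone.log   # illustrative log path

echo "post-clone configuration ran on $( hostname ) at $( date )" >> ${LOG}
```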

Here is an example script in which the oraInventory is rebuilt from the ORACLE_HOME directory, a best practice when cloning the installation directory where the Oracle RDBMS resides.


// Beginning of File cloneorahome.sh
#!/bin/sh

su - oracle -c '
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/product/11.2.0
ORACLE_SID=ORATEST
NLS_LANG=American_america.WE8ISO8859P15
ORA_NLS11=${ORACLE_HOME}/nls/data
PATH=$PATH:$ORACLE_HOME/bin

export PATH
export ORACLE_BASE
export ORACLE_HOME
export ORACLE_SID
export NLS_LANG
export ORA_NLS11

rm -r /oracle/oraInventory/ContentsXML
rm -r /oracle/oraInventory/logs

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/oracle" ORACLE_HOME="/oracle/product/11.2.0" OSDBA_GROUP=dba OSOPER_GROUP=oper INVENTORY_LOCATION=/oracle/oraInventory -defaultHomeName -O -ignorePrereq -jreloc /usr/lib/jvm/java-6-openjdk

'>> /var/log/cloneorahome.log 2>&1
/oracle/product/11.2.0/root.sh >> /var/log/cloneorahome.log 2>&1
// End of File cloneorahome.sh

Using clonejailz.sh

To illustrate the use of this script with a practical example, I take as a starting point my environment for Oracle 11gR2 under FreeBSD and create a zpool on a partition of my testing laptop.


We will create a cloned environment from the debora Jail and call it oratest1.
  
First of all, we set up the ZFS pool on which the cloned Jail will be created. We use partition 12, which we have previously labeled freedsk0.

root@morsa:/root # gnop create -S 4096 /dev/gpt/freedsk0
root@morsa:/root # zpool create -m none datapool0 /dev/gpt/freedsk0.nop
root@morsa:/root # zpool status
  pool: datapool0
 state: ONLINE
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        datapool0           ONLINE       0     0     0
          gpt/freedsk0.nop  ONLINE       0     0     0

errors: No known data errors

  pool: fbsdzpool1
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        fbsdzpool1      ONLINE       0     0     0
          gpt/freedsk1  ONLINE       0     0     0

errors: No known data errors
 
Run the script clonejailz.sh with the following parameters:

root@morsa:/root #/jailz/bin/clonejailz.sh bjname=debora njpool=datapool0 njname=oratest1 jipadr=127.0.0.100  script=cloneorahome.sh
receiving full stream of fbsdzpool1/jailz/debora@clone_for_oratest1 into datapool0/jailz/debora@clone_for_oratest1
received 7.56GB stream in 392 seconds (19.7MB/sec)
oratest1: created
stty: standard input: Inappropriate ioctl for device
Starting periodic command scheduler: crond.
stty: standard input: Inappropriate ioctl for device
Asking all remaining processes to terminate...done.
All processes ended within 1 seconds....done.
/etc/rc0.d/S31umountnfs.sh: line 45: /etc/mtab: No such file or directory
Deconfiguring network interfaces...done.
Cleaning up ifupdown....
Unmounting temporary filesystems...umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
failed.
Deactivating swap...failed.
Unmounting local filesystems...umount2: Operation not permitted

oratest1: removed
oratest1 was successfully created.

Within the newly created Jail, oratest1, check the log file /var/log/cloneorahome.log generated by the cloneorahome.sh script.

// Beginning of File /jailz/oratest1/var/log/cloneorahome.log
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/oracle" "ORACLE_HOME=/oracle/product/11.2.0" "oracle_install_OSDBA=dba" "oracle_install_OSOPER=oper" "INVENTORY_LOCATION=/oracle/oraInventory" -defaultHomeName   -ignorePrereq  -jreloc  /usr/lib/jvm/java-6-openjdk  -defaultHomeName -silent -noConfig -nowait
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4096 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-01-01_09-07-58PM. Please wait ...Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.

You can find the log of this install session at:
 /oracle/oraInventory/logs/cloneActions2014-01-01_09-07-58PM.log

.................................................................................................... 100% Done.

Installation in progress (Wednesday, January 1, 2014 9:08:20 PM CET)
............................................................................                                                    76% Done.
Install successful

Linking in progress (Wednesday, January 1, 2014 9:08:31 PM CET)
Link successful

Setup in progress (Wednesday, January 1, 2014 9:09:19 PM CET)
Setup successful

End of install phases.(Wednesday, January 1, 2014 9:11:08 PM CET)
Starting to execute configuration assistants
The following configuration assistants have not been run. This can happen because Oracle Universal Installer was invoked with the -noConfig option.
--------------------------------------
The "/oracle/product/11.2.0/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The "/oracle/product/11.2.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.

--------------------------------------
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/product/11.2.0/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts
   
The cloning of OraHome1 was successful.
Please check '/oracle/oraInventory/logs/cloneActions2014-01-01_09-07-58PM.log' for more details.
Check /oracle/product/11.2.0/install/root_oratest1_2014-01-01_21-11-09.log for the output of root script
// End of File /jailz/oratest1/var/log/cloneorahome.log

To verify the correct operation of the Jails, we start debora and oratest1 together with their databases.

root@morsa:/root # sysctl kern.ipc.shmmax=214743648
root@morsa:/root # jail -f /jailz/etc/jail.conf -c debora 
debora: created
Starting periodic command scheduler: crond.
root@morsa:/root # jail -f /jailz/etc/jail.conf -c oratest1 

oratest1: created
Starting periodic command scheduler: crond.

root@morsa:/root # jls
   JID  IP Address      Hostname                      Path
     2  127.0.0.25      debora                        /jailz/debora
     3  127.0.0.100     oratest1                      /jailz/oratest1



From console ttyv1 (Alt+F2) we connect to the debora Jail and start the ORATEST database.

root@morsa:/root # jexec debora /bin/sh
sh-3.2# uname -a
Linux debora 2.6.16 FreeBSD 9.1-RELEASE-p4 #0: Mon Jun 17 11:42:37 UTC 2013 i686
GNU/Linux
sh-3.2# su - oracle
oracle@debora:~$ . ./ORATEST.sh
oracle@debora:~$ sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 1 21:44:09 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1341312 bytes
Variable Size 750782592 bytes
Database Buffers 314572800 bytes
Redo Buffers 4636672 bytes
Database mounted.
Database opened.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 ORATEST
debora
11.2.0.1.0 01-JAN-14 OPEN NO 1 STOPPED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO

From console ttyv2 (Alt+F3) we connect to the oratest1 Jail and start the ORATEST database.

root@morsa:/root # jexec 3 /bin/sh
sh-3.2# uname -n
oratest1
sh-3.2# su - oracle
oracle@oratest1:~$ . ./ORATEST.sh
oracle@oratest1:~$ sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 1 21:44:09 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1341312 bytes
Variable Size 750782592 bytes
Database Buffers 314572800 bytes
Redo Buffers 4636672 bytes
Database mounted.
Database opened.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 ORATEST
oratest1
11.2.0.1.0 01-JAN-14 OPEN NO 1 STOPPED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO

Final Thoughts

ZFS and FreeBSD Jails are technologies that allow us to easily duplicate an environment (with ZFS snapshots) and run the newly created environment in isolation (with Jails).

Of course, there are alternatives to the combination of ZFS and Jails; any filesystem that supports snapshots can serve this purpose.


There are also alternatives to FreeBSD Jails, such as LXC and OpenVZ in Linux environments. In UNIX environments we can cite AIX Workload Partitions and Solaris Zones.

In any case, ZFS + FreeBSD Jails is a great solution for rapid and easy service provisioning, as I have shown.
