Local Container Build Process
NOTE: If building a Solaris 8 or Solaris 9 branded container, see Container Farm Solaris 8/9 Container Build Process.

Zone Configuration and Initial Boot on first cluster node
If not already done, create the parent directory for zone root mountpoints:

hp001# mkdir /tech/zones
hp001# chmod 700 /tech/zones

Create the zone root mountpoint.
The final element of the zonepath should be the same as the chosen hostname of the zone, in this case isdzone1.

hp001# cd /tech/zones
hp001# mkdir isdzone1

Choose a free 18Gb root LUN by consulting the SAN. Native containers do not require much disk space, so an 18Gb LUN divided into a 9Gb root volume and a 9Gb /tech volume is adequate. Update the Storage Tracking page to indicate which LUN was used.
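As a hedged aid (not part of the formal procedure), unassigned LUNs can usually be spotted from the global zone before consulting the tracking page; the device name and sample output line below are illustrative only:

hp001# vxdisk -o alldgs list | egrep 'nolabel|online invalid'
emcpower5s2  auto       -        -        nolabel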
If the LUN status shown by vxdisk list is nolabel, use the Solaris format utility to label it:
hp001# format
[ Select the desired emcpower device and label it ]
Initialize the LUN for the zone root.
hp001# vxdisk init <accessname> format=cdsdisk

If the following error is raised:
VxVM vxdisk ERROR V-5-1-5433 Device emcpower5: init failed:
Disk already initialized

then do the following:
hp001# vxdisk destroy <accessname>
hp001# vxdisk init <accessname> format=cdsdisk

Create a diskgroup, volumes, and VxFS filesystems for the zone root.
Make the /tech volume 9Gb and allocate the remainder of the LUN to the root volume.

hp001# vxdg init isdzone1_root <accessname>
hp001# vxassist -g isdzone1_root make isdzone1_techvol 9000m
hp001# vxassist -g isdzone1_root maxsize
Maximum volume size: 9566172 (9333Mb)
hp001# vxassist -g isdzone1_root make isdzone1_rootvol <maxsize_value>
hp001# mkfs -F vxfs /dev/vx/rdsk/isdzone1_root/isdzone1_rootvol
hp001# mkfs -F vxfs /dev/vx/rdsk/isdzone1_root/isdzone1_techvol
hp001# mount -F vxfs /dev/vx/dsk/isdzone1_root/isdzone1_rootvol /tech/zones/isdzone1

Verify that the container's root directory is mounted.
hp001# cd /tech/zones/isdzone1
hp001# df -h .
Filesystem                                  size  used  avail  capacity  Mounted on
/dev/vx/dsk/isdzone1_root/isdzone1_rootvol   17G   18M    17G        1%  /tech/zones/isdzone1

Set the permissions on the mountpoint to 700:
hp001# cd ..
hp001# chmod 700 isdzone1
hp001# ls -l
total 2
drwx------   2 root     root         512 Feb 17 12:46 isdzone1

Create a new zone configuration file.
The file is temporary and can be discarded after the zone has been created. Copy the contents of the example configuration file below into a text file on the server and edit as necessary.

create -b
set zonename=<zonename>   (a)
set zonepath=/tech/zones/<zonename>
set autoboot=false   (b)
set brand=native
set ip-type=shared
add fs
set dir=/tech
set special=/dev/vx/dsk/<zonename>_root/<zonename>_techvol
set raw=/dev/vx/rdsk/<zonename>_root/<zonename>_techvol
set type=vxfs
end
add fs
set dir=/etc/globalzone
set special=/etc/nodename
set type=lofs
set options=ro
end
add net
set physical=<public_nic>   (d)
set address=<public-ip-address>
end
add net
set physical=<backup_nic>   (d)
set address=<backup-ip-address>
end
set scheduling-class=FSS
set pool=<zonename>   (i)
add capped-cpu
set ncpus=<number>   (h)
end
set cpu-shares=<shares>   (e)(h)
add capped-memory
set physical=<amount_of_memory>   (f)(h)
set swap=<amount_of_memory>   (f)(g)(h)
end
set max-shm-memory=<amount_of_memory>   (f)(h)
verify
commit
exit

Notes on the zone configuration parameters above:
(a) The zonepath is the path in the global zone that contains the root of the zone. Normally the zonepath contains three items: root, dev, and (when the zone is detached) zonecfg.xml.
(b) Autoboot must be set to false because the startup and shutdown of zones is controlled through VCS.
(d) Since ip-type is shared, this must be an interface that is already plumbed in the global zone. The instance name does not include a colon or virtual interface number; it is of the form nxge0, qge0, e1000g0, etc. All IP settings for a shared-IP zone are controlled from the global zone.
(e) In the Hartford environment, the number of shares equals the number of CPU threads that this zone is granted when it is contending for CPU cycles with other zones on the server. If there are free CPU resources, the zone can use them up to the limit of its CPU cap. See the Container Farm Operations Guide for more information on Solaris resource controls.
(f) Memory quantities can be specified as bytes (numeric value), or by using suffixes k, m, or g. Example: 4000000000 or 4000m or 4g.
(g) The initial limit on swap utilization is equal to the physical memory assigned to the container. This value can be changed later based on the needs of the application in the container.
(h) The values of the container resource controls are set based on the size of the container (small, medium, or large). See Container Farm Guidelines for the appropriate CPU and memory resource values; a verification sketch follows these notes.
(i) This value should be set to 'active' if using aggregated CPU pools. If using custom CPU pools, the pool name must equal the zone name.
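The notes above describe the resource controls only in prose. As a hedged verification sketch (run once the zone is booted later in this procedure; the control names are standard Solaris 10 zone rctls, but confirm output formats on your release), the effective controls can be inspected from the global zone:

hp001# prctl -n zone.cpu-shares -i zone isdzone1
hp001# prctl -n zone.cpu-cap -i zone isdzone1
hp001# prctl -n zone.max-swap -i zone isdzone1
hp001# prctl -n zone.max-shm-memory -i zone isdzone1
hp001# rcapstat -z 1 1
[ the capped-memory physical value is enforced by rcapd and reported by rcapstat ]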
Configure the zone:
hp001# zonecfg -z <zonename> -f <path_to_config_file>

Check the state of the newly created/configured zone:
hp001# zoneadm list -cv
  ID NAME       STATUS      PATH
   0 global     running     /
   - isdzone1   configured  /tech/zones/isdzone1

Install the configured zone.
hp001# zoneadm -z isdzone1 install
Preparing to install zone <isdzone1>.
Creating list of files to copy from the global zone.
Copying <17000> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1313> packages on the zone.
Initialized <1313> packages on zone.
Zone <isdzone1> is initialized.
The file <...> contains a log of the zone installation.

Verify the state of the zone one more time:
hp001# zoneadm list -cv
  ID NAME       STATUS      PATH
   0 global     running     /
   - isdzone1   installed   /tech/zones/isdzone1

Create a sysidcfg file in the zone's /etc directory.
Do this to avoid having to perform first-boot system identification steps. This file can be copied from the /etc directory of an existing zone with the hostname entry changed to match the current zone:

hp001# cd /tech/zones/isdzone1/root/etc
hp001# vi sysidcfg
name_service=NONE
root_password=4n430ck
network_interface=PRIMARY {hostname=<hostname>}
keyboard=US-English
system_locale=C
terminal=vt100
security_policy=none
nfs4_domain=dynamic
timezone=US/Eastern
hp001#

Boot the zone and monitor the initial boot process from the zone's console:
hp001# zoneadm -z isdzone1 boot
hp001# zlogin -C -e[ isdzone1
[Connected to zone 'isdzone1' console]
SunOS Release 5.10 Version Generic_141444-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: isdzone1
Loading smf(5) service descriptions: 151/151
.........
rebooting system due to change(s) in /etc/default/init
[NOTICE: Zone rebooting]
.........
isdzone1 console login: [.
[Connection to zone "isdzone1" console closed]
hp001#

Shut down the zone, detach it, and deport the diskgroup.
hp001# zoneadm -z isdzone1 halt
hp001# cd /
hp001# zoneadm -z isdzone1 detach
hp001# umount /tech/zones/isdzone1
hp001# vxdg deport isdzone1_root
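Before handing the storage over to VCS, a quick hedged sanity check (not in the original write-up) is to confirm that the mountpoint is no longer on the VxVM volume, that isdzone1_root no longer shows as an imported diskgroup, and that the detached zone has dropped back to the configured state:

hp001# df -h /tech/zones/isdzone1
hp001# vxdg list
hp001# zoneadm list -cv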
Integration of Container into Veritas Cluster framework

Create a VCS service group for the container.
NOTE: The name of the service group MUST be the same as the hostname of the container to ensure proper operation of VCS triggers. If using the build script, note that the required commands differ between VCS 5.0 and VCS 5.1 (see the VCS 5.1 ContainerInfo step below).

hp001# haconf -makerw
hp001# hagrp -add isdzone1
hp001# hagrp -modify isdzone1 SystemList hp001 1 hp002 2 hp003 3 hp004 4
hp001# hagrp -modify isdzone1 PreOnline 1
hp001# hagrp -modify isdzone1 FailOverPolicy Load
hp001# hagrp -modify isdzone1 AutoStart 0
hp001# hagrp -modify isdzone1 AutoStartPolicy Load
hp001# hagrp -modify isdzone1 AutoStartList hp001 hp002 hp003 hp004
hp001# hagrp -modify isdzone1 AutoFailOver 0
hp001# hagrp -modify isdzone1 AutoRestart 0
hp001# hagrp -display isdzone1

Create a DiskGroup resource:
hp001# hares -add isdzone1_dg DiskGroup isdzone1
hp001# hares -modify isdzone1_dg DiskGroup isdzone1_root
hp001# hares -modify isdzone1_dg Enabled 1

Add a Mount resource for the root filesystem in the diskgroup and make it dependent on the diskgroup being online:
hp001# hares -add isdzone1_mntroot Mount isdzone1
hp001# hares -modify isdzone1_mntroot BlockDevice /dev/vx/dsk/isdzone1_root/isdzone1_rootvol
hp001# hares -modify isdzone1_mntroot MountPoint /tech/zones/isdzone1
hp001# hares -modify isdzone1_mntroot FSType vxfs
hp001# hares -modify isdzone1_mntroot Enabled 1
hp001# hares -modify isdzone1_mntroot FsckOpt %-y
hp001# hares -probe isdzone1_mntroot -sys hp001
hp001# hares -link isdzone1_mntroot isdzone1_dg

Create the resource for the zone and make it dependent on the root filesystem being mounted.
VCS 5.1 only: also set the group's ContainerInfo attribute.

hp001# hagrp -modify isdzone1 ContainerInfo Name isdzone1 Type Zone Enabled 1
hp001# hares -add isdzone1_zone Zone isdzone1
hp001# hares -modify isdzone1_zone ZoneName isdzone1
hp001# hares -modify isdzone1_zone Enabled 1
hp001# hares -link isdzone1_zone isdzone1_mntroot
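At this point the group should contain three resources linked zone -> mount -> diskgroup. A hedged check using standard VCS query commands (output abbreviated and illustrative):

hp001# hagrp -resources isdzone1
isdzone1_dg
isdzone1_mntroot
isdzone1_zone
hp001# hares -dep isdzone1_zone
#Group         Parent          Child
isdzone1       isdzone1_zone   isdzone1_mntroot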
If the container is a production container, set it to automatically fail over and start.

Production containers will restart automatically in the event of an unscheduled node outage or when the cluster is first started. Non-production containers are set to require manual intervention to start following failures or at cluster boot time.
hp001# hagrp -modify isdzone1 AutoStart 1
hp001# hagrp -modify isdzone1 AutoRestart 1
hp001# hagrp -modify isdzone1 AutoFailOver 1

Commit the VCS changes to main.cf:
hp001# haconf -dump
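Optionally, as a hedged check, confirm that the group definition was written to the on-disk configuration (main.cf lives in the standard VCS location shown below; output is illustrative):

hp001# grep "^group isdzone1" /etc/VRTSvcs/conf/config/main.cf
group isdzone1 (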
Bring up the service group and verify that the storage is mounted and the zone has been booted.

hp001# hagrp -online isdzone1 -sys hp001
hp001# vxdg list
NAME            STATE           ID
isdzone1_root   enabled,cds     1263496791.20.hp002
hp001# vxprint -g isdzone1_root
TY NAME                ASSOC              KSTATE   LENGTH    PLOFFS   STATE    TUTIL0  PUTIL0
dg isdzone1_root isdzone1_root - - - - - -
dm c6t20d182 emcpower0s2 - 56554240 - - - -
v isdzone1_rootvol fsgen ENABLED 36864000 - ACTIVE - -
pl isdzone1_rootvol-01 isdzone1_rootvol ENABLED 36864000 - ACTIVE - -
sd c6t20d182-01 isdzone1_rootvol-01 ENABLED 36864000 0 - - -
v isdzone1_techvol fsgen ENABLED 18432000 - ACTIVE - -
pl isdzone1_techvol-01 isdzone1_techvol ENABLED 18432000 - ACTIVE - -
sd c6t20d182-02 isdzone1_techvol-01 ENABLED 18432000 0 - - -
hp001# zoneadm list -cv
  ID NAME       STATUS     PATH                   BRAND    IP
   0 global     running    /                      native   shared
   2 isdzone1   running    /tech/zones/isdzone1   native   shared
hp001# zlogin isdzone1
[Connected to zone 'isdzone1' pts/8]
Last login: Thu Jan 21 18:09:13 on pts/8
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
isdzone1# df -h
Filesystem size used avail capacity Mounted on
/ 17G 1.1G 16G 7% /
/dev 17G 1.1G 16G 7% /dev
/lib 7.9G 4.5G 3.3G 58% /lib
/platform 7.9G 4.5G 3.3G 58% /platform
/sbin 7.9G 4.5G 3.3G 58% /sbin
/tech 8.7G 8.8M 8.6G 1% /tech
/usr 7.9G 4.5G 3.3G 58% /usr
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 121G 352K 121G 1% /etc/svc/volatile
.../libc_psr_hwcap2.so.1 7.9G 4.5G 3.3G 58% /platform/sun4v/lib/libc_psr.so.1
.../libc_psr/libc_psr_hwcap2.so.1 7.9G 4.5G 3.3G 58% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 121G 32K 121G 1% /tmp
swap 121G 32K 121G 1% /var/run
isdzone1# exit
hp001#

Save the current zone configuration to a file in preparation for copying it to the other cluster nodes:
hp001# cd /cluster/private
hp001# zonecfg -z isdzone1 export > isdzone1.cfg

Edit the saved zonecfg file to resolve a problem with parameters exported in the wrong order.
hp001# vi isdzone1.cfg

Move the three lines

add capped-memory
set physical=<value>
end

to just before the first 'add rctl' line in the file.
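For illustration only, after the edit the relevant portion of isdzone1.cfg should look roughly like the fragment below; the physical value and rctl limits are placeholders, and the exact rctl entries exported will depend on the values chosen during zone configuration:

add capped-memory
set physical=4G
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=4,action=none)
end
add rctl
set name=zone.max-swap
add value (priv=privileged,limit=4294967296,action=deny)
end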
Configuration of zone on other cluster nodes
Note the switch to the other cluster host for the following series of steps!

If not already done, create the parent directory for zone root mountpoints:
hp002# mkdir /tech/zones
hp002# chmod 700 /tech/zones

Create the zone root mountpoint. The final element of the zonepath should be the same as the chosen hostname of the zone, in this case isdzone1.
hp002# cd /tech/zones
hp002# mkdir isdzone1

Copy the zone configuration from the first node to a file on the second. Configure the zone on the second node and verify that the zone is in the "configured" state.
hp002# zonecfg -z isdzone1 -f /cluster/private/isdzone1.cfg
hp002# zoneadm list -cv
  ID NAME       STATUS      PATH
   0 global     running     /
   - isdzone1   configured  /tech/zones/isdzone1

Verify that the zone can be attached to the second node by switching the service group to it:
hp002# hagrp -switch isdzone1 -to hp002

Repeat the steps in this section on all remaining cluster nodes that will host this container (see the sketch below for one way to stage the configuration on them).
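A hedged way to stage the work on the remaining nodes (assuming /cluster/private is not shared between nodes and that root scp/ssh between cluster members is permitted in this environment; hostnames are examples):

hp002# for node in hp003 hp004
> do
>   scp /cluster/private/isdzone1.cfg ${node}:/cluster/private/
>   ssh ${node} "mkdir -p /tech/zones/isdzone1; chmod 700 /tech/zones /tech/zones/isdzone1"
>   ssh ${node} "zonecfg -z isdzone1 -f /cluster/private/isdzone1.cfg"
> done
hp002# hagrp -switch isdzone1 -to hp003
[ repeat the switch test for each remaining node ]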
Zone Postbuild Steps
Execute the following commands to copy Hartford-specific configuration files from the global zone to the local zone:
hp002# cd /etc
hp002# cp passwd shadow issue issue.ssh motd profile nsswitch.conf \
    auto_master auto_home auto_nas .login /tech/zones/isdzone1/root/etc
hp002# cp profile.no.direct.login.IDs /tech/zones/isdzone1/root/etc
hp002# chmod 644 /tech/zones/isdzone1/root/etc/profile.no.direct.login.IDs
hp002# cp ssh/sshd_config /tech/zones/isdzone1/root/etc/ssh
hp002# cd /etc/default
hp002# cp login passwd inetinit nfs /tech/zones/isdzone1/root/etc/default
hp002# cd /etc/ftpd
hp002# cp ftpusers /tech/zones/isdzone1/root/etc/ftpd
hp002# cd /etc/security
hp002# cp policy.conf /tech/zones/isdzone1/root/etc/security
hp002# cd /etc/skel
hp002# cp local* /tech/zones/isdzone1/root/etc/skel
hp002# cd /
hp002# cp .profile /tech/zones/isdzone1/root
hp002# cd /tech/support/bin
hp002# cp show-server-config.sh /tech/zones/isdzone1/root/tech/support/bin
hp002# cd /tech/support/etc
hp002# cp isd-release /tech/zones/isdzone1/root/tech/support/etc
hp002# cd /opt
hp002# find local | cpio -pdm /tech/zones/isdzone1/root/opt

Install the VSA agent and reboot the container to register it.
hp002# zlogin isdzone1
isdzone1# /net/isdistatus/tech/install/vsa/sun_vsa_install-1.ksh
isdzone1# exit
[Connection to zone isdzone1 pts/5 closed]
hp002# hares -offline isdzone1_zone -sys `hostname`
hp002# hares -online isdzone1_zone -sys `hostname`

Log in to the zone and run the VSA scan. Examine the results of the scan and remedy any violations found.
hp002# zlogin isdzone1
isdzone1# /etc/vsa/bin/dragnet -s
isdzone1# cd /var/adm
isdzone1# grep ^VIOL <hostname>.e-admin*

Disable ufsdumps
- ufsdumps do not work in containers, because the container does not see its filesystems as UFS.
- Edit root's crontab file and comment out the ufsdump line.
- Add a comment explaining that this has been done deliberately because this is a container.
- The crontab, when done, will look similar to this:
# ufsdumps and farmstat disabled - they cannot run in a container
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
# The following line flushes the sendmail queue hourly
0 * * * * /usr/lib/sendmail -q
#*********************************************************************
#40 13 * * 3 /tech/support/bin/ufsdump_standard_nfs.ksh
#*********************************************************************
# This is the i-Status script run to the virtual server isdistatus which is either isdsunsc01 or isdsunsc02
15 22 * * 0 /net/isdistatus.thehartford.com/tech/apache/htdocs/i-Status/bin/init_current.config > /dev/null 2>&1
# This is the getconfig script run to the virtual server isdistatus which is either isdsunsc01 or isdsunsc02
45 22 * * 0 /net/isdistatus.thehartford.com/tech/apache/htdocs/server-config/bin/get-system-config.sh > /dev/null 2>&1
#*********************************************************************
# This cleans up /tech/core older than 7 days
0 5 * * * find /tech/core/* -a -mtime +7 -ls -exec rm {} \; >> /tech/support/logs/remove_corefiles.log 2>&1
# This runs the disksuite-healthcheck script to check mirror status
15 6 * * 1,3,5 /tech/support/bin/disksuite-healthcheck.sh > /dev/null 2>&1
# If srmSUN data generation has not terminated, stop before starting new day
0 0 * * * /var/adm/perfmgr/bin/terminate.srm ; /var/adm/perfmgr/bin/verify.srm
# Verify srmSUN data is still being generated
25,55 * * * * /var/adm/perfmgr/bin/verify.srm
# Remove srmSUN data files older than 7 days
0 1 * * * /var/adm/perfmgr/bin/clean.account
# Create srmSUN Single File for Data Transfer
59 23 * * * /var/adm/perfmgr/bin/package.srm -z
1 2 * * * /etc/vsa/bin/dragnet >/dev/null 2>&1
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___
# This runs farmstat to collect history on currently running containers and also current capacity
#0,10,20,30,40,50 * * * * /cluster/private/fmon/fmon.sh
10 3 * * 0 /usr/lib/newsyslog

Modify zone configuration files as follows:
isdzone1# ln -s /usr/local/etc/sudoers /etc/sudoers
isdzone1# cd /etc/cron.d
isdzone1# vi cron.allow
root
sys
isdzone1# chown root:sys cron.allow
isdzone1# svcadm disable finger rlogin sma snmpdx wbem
isdzone1# svcadm disable cde-calendar-manager cde-login cde-spc
isdzone1# svcadm disable ftp telnet
isdzone1# svcadm disable rstat shell:default cde-ttdbserver cde-printinfo
isdzone1# exit

Copy ISD NFS mounts to the vfstab of the container and create the mountpoints.
hp002# grep nfs /etc/vfstab >> /tech/zones/isdzone1/root/etc/vfstab
hp002# vi /tech/zones/isdzone1/root/etc/vfstab
(delete unnecessary shares)
hp002# mkdir -p /tech/zones/isdzone1/root/ETSDentbkup/logs
hp002# mkdir /tech/zones/isdzone1/root/ISDoracle

Test ssh login to the container using a Vintela ID. Test sudo, DNS, and NFS functionality.
hp002# ssh sp19223@isdzone1
password:
isdzone1$ sudo sudosh
password:
isdzone1# ping isdiptdevx01
isdiptdevx01 is alive
isdzone1# mount /ETSDentbkup
isdzone1# mount /ETSDentbkup/logs
isdzone1# mount /ISDoracle

STOP HERE! Do not proceed until you have a clean VSA HC scan.
Run the script to join the Vintela domain.
Consult the ISD Wiki page "Vintela Install" for further instructions on the script.

hp002# cd /export/home
hp002# cp -r uidadmin /tech/zones/isdzone1/root/export/home
hp002# chown uidadmin:uidadmin /tech/zones/isdzone1/root/export/home/uidadmin
isdzone1# /nas/isd/Vintela/VAS_3_5_2_12/vasInstall.ksh

Install BMC, Altiris, SRM, and Parity agents
NOTE: The BMC agent should not be installed in a container. You may be able to review the script below and run only the portions needed to install the SRM and Parity agents.

isdzone1# mkdir /opt/rsc
isdzone1# /net/isdistatus/tech/install/scripts/add-build-agents.sh