Thursday, April 14, 2011

Solaris Local Container Build Process

Local Container Build Process

NOTE: If building a Solaris 8 or Solaris 9 Branded container, see Container Farm Solaris 8/9 Container Build Process.

Zone Configuration and Initial Boot on first cluster node

If not already done, create parent directory for zone root mountpoints
 hp001# mkdir /tech/zones
 hp001# chmod 700 /tech/zones

Create the zone root mountpoint.
The final element of the zonepath should be the same as the chosen hostname of the zone, in this case isdzone1.
 hp001# cd /tech/zones
 hp001# mkdir isdzone1

Choose a free 18Gb root LUN by consulting the SAN.
Native containers do not require much disk space, so an 18Gb LUN divided into a 9Gb root volume and a 9Gb /tech volume is adequate.
Update the Storage Tracking page to indicate which LUN was used.
If the LUN status from vxdisk list is nolabel, use the Solaris format utility to label it.
 hp001# format

[ Select desired emcpower device and label it ]
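To confirm the label was applied, the device status can be re-checked; a quick sketch (the emcpower name is whichever device was selected above):

 hp001# vxdisk list | grep emcpower
[ the chosen device should no longer show a status of nolabel ]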

Initialize a LUN for the zone root.
 hp001# vxdisk init <accessname> format=cdsdisk

If the following error is raised:
VxVM vxdisk ERROR V-5-1-5433 Device emcpower5: init failed:Disk already initialized

then do the following:
 hp001# vxdisk destroy <accessname>
 hp001# vxdisk init <accessname> format=cdsdisk

Create a diskgroup, volumes and VXFS filesystems for the zone root.
Make the /tech volume 9Gb, and allocate the remainder of the LUN to /.
 hp001# vxdg init isdzone1_root <accessname>
 hp001# vxassist -g isdzone1_root make isdzone1_techvol 9000m
 hp001# vxassist -g isdzone1_root maxsize
Maximum volume size: 9566172 (9333Mb)
 hp001# vxassist -g isdzone1_root \
make isdzone1_rootvol <maxsize_value>
 hp001# mkfs -F vxfs /dev/vx/rdsk/isdzone1_root/isdzone1_rootvol
 hp001# mkfs -F vxfs /dev/vx/rdsk/isdzone1_root/isdzone1_techvol
 hp001# mount -F vxfs /dev/vx/dsk/isdzone1_root/isdzone1_rootvol /tech/zones/isdzone1
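As an optional shortcut for the rootvol sizing step above, the maxsize value can be captured in a variable instead of being copied by hand. This is only a sketch; it assumes the "Maximum volume size:" output format shown above and relies on vxassist interpreting an unsuffixed size in the same default unit that maxsize reports:

 hp001# MAXSZ=`vxassist -g isdzone1_root maxsize | awk '{print $4}'`
 hp001# vxassist -g isdzone1_root make isdzone1_rootvol $MAXSZ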

Verify that the container's root directory is mounted.
 hp001# cd /tech/zones/isdzone1
 hp001# df -h .
Filesystem size used avail capacity Mounted on
/dev/vx/dsk/isdzone1_root/isdzone1_rootvol
17G 18M 17G 1% /tech/zones/isdzone1

Set the permissions on the mounted zone root directory to 700:
 hp001# cd ..
 hp001# chmod 700 isdzone1

 hp001# ls -l
total 2
drwx------ 2 root root 512 Feb 17 12:46 isdzone1

Create a new zone configuration file.
The file is temporary and can be discarded after the zone has been created. Copy the contents of the example configuration file below into a text file on the server and edit as necessary.
create -b
set zonename=<zonename>
set zonepath=/tech/zones/<zonename> (a)
set autoboot=false (b)
set brand=native
set ip-type=shared
add fs
set dir=/tech
set special=/dev/vx/dsk/<zonename>_root/<zonename>_techvol
set raw=/dev/vx/rdsk/<zonename>_root/<zonename>_techvol
set type=vxfs
end
add fs
set dir=/etc/globalzone
set special=/etc/nodename
set type=lofs
set options=ro
end
add net
set physical=<public_nic> (d)
set address=<public-ip-address>
end
add net
set physical=<backup_nic> (d)
set address=<backup-ip-address>
end
set scheduling-class=FSS
set pool=<zonename> (i)
add capped-cpu
set ncpus=<number> (h)
end
set cpu-shares=<shares> (h)(e)
add capped-memory
set physical=<amount_of_memory> (h)(f)
set swap=<amount_of_memory> (f)(g)(h)
end
set max-shm-memory=<amount_of_memory> (h)(f)
verify
commit
exit


Notes on the zone configuration parameters above:

(a) The zonepath is the path in the global zone which contains the root of the zone. Normally, the zonepath will contain three subtrees: root, dev, and (when detached) zonecfg.xml
(b) Autoboot must be set to false because the startup and shutdown of zones are controlled through VCS.
(d) Since ip-type is shared, this must be an interface which is already plumbed in the global zone. The interface name does not contain a colon or virtual interface number; it is of the form nxge0, qge0, e1000g0, etc. All IP settings for shared-IP zones are controlled in the global zone.
(e) In the Hartford environment, the number of shares equals the number of CPU threads which this zone will be granted when the zone is contending for CPU cycles with other zones on the server. If there are free CPU resources, the zone can use them up to the limit of its CPU cap. See the Container Farm Operations Guide for more information on Solaris resource controls.
(f) Memory quantities can be specified as bytes (numeric value), or by using suffixes k, m, or g. Example: 4000000000 or 4000m or 4g.
(g) The initial limit on swap utilization is equal to the physical memory assigned to the container. This value can be changed based on the needs of the application in the container.
(h) The values of container resource controls are set based on the size of the container (small, medium, or large). See the Container Farm Guidelines for the appropriate CPU and memory resource values.
(i) This value should be set to 'active' if using aggregated CPU pools. If using custom CPU pools, the pool name must equal the zone name.
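For reference, a filled-in configuration for isdzone1 might look like the sketch below. The interface names, IP addresses, and resource sizes are illustrative placeholders only (not values from this build) and must be replaced with the values assigned to the container:

create -b
set zonename=isdzone1
set zonepath=/tech/zones/isdzone1
set autoboot=false
set brand=native
set ip-type=shared
add fs
set dir=/tech
set special=/dev/vx/dsk/isdzone1_root/isdzone1_techvol
set raw=/dev/vx/rdsk/isdzone1_root/isdzone1_techvol
set type=vxfs
end
add fs
set dir=/etc/globalzone
set special=/etc/nodename
set type=lofs
set options=ro
end
add net
set physical=nxge0
set address=10.1.1.51
end
add net
set physical=nxge1
set address=10.2.1.51
end
set scheduling-class=FSS
set pool=isdzone1
add capped-cpu
set ncpus=2
end
set cpu-shares=2
add capped-memory
set physical=4g
set swap=4g
end
set max-shm-memory=4g
verify
commit
exit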
Configure the zone:
 hp001# zonecfg -z <zonename> -f <path_to_config_file>

Check the state of the newly created/configured zone
 hp001# zoneadm list -cv

ID NAME STATUS PATH
0 global running /
- isdzone1 configured /tech/zones/isdzone1

Install the configured zone.
 hp001# zoneadm -z isdzone1 install

Preparing to install zone
Creating list of files to copy from the global zone.
Copying <17000> files to the zone.
...
...
...
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1313> packages on the zone.
Initialized <1313> packages on zone.
Zone <isdzone1> is initialized.
The file <...> contains a log of the zone installation.

Verify the state of the zone one more time:
 hp001# zoneadm list -cv
ID NAME STATUS PATH
0 global running /
...
...
...
- isdzone1 installed /tech/zones/isdzone1
...
...
...

Create a sysidcfg file in the zone's /etc directory
Do this to avoid having to perform firstboot ID actions. This file can be copied from the /etc directory of an existing zone and the hostname entry changed to match the current zone:
 hp001# cd /tech/zones/isdzone1/root/etc
 hp001# vi sysidcfg

name_service=NONE
root_password=4n430ck
network_interface=PRIMARY{hostname=<hostname>}
keyboard=US-English
system_locale=C
terminal=vt100
security_policy=none
nfs4_domain=dynamic
timezone=US/Eastern

 hp001#

Boot the zone and monitor the initial boot process from the zone's console:
 hp001# zoneadm -z isdzone1 boot
 hp001# zlogin -C -e[ isdzone1
[Connected to zone 'isdzone1' console]
SunOS Release 5.10 Version Generic_141444-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: isdzone1
Loading smf(5) service descriptions: 151/151
...
...
...
rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]
...
...
...
isdzone1 console login: [.

[Connection to zone "isdzone1" console closed]
 hp001#

Shut down the zone and deport the diskgroup

 hp001# zoneadm -z isdzone1 halt
 hp001# cd /
 hp001# zoneadm -z isdzone1 detach
 hp001# umount /tech/zones/isdzone1
 hp001# vxdg deport isdzone1_root
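As a quick sanity check (a sketch), confirm that the zone is detached and the diskgroup is no longer imported on this node:

 hp001# zoneadm list -cv
[ isdzone1 should now show a status of configured ]
 hp001# vxdg list
[ isdzone1_root should no longer appear among the imported diskgroups ]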

Integration of Container into Veritas Cluster framework

Create a VCS service group for the container
NOTE: The name of the service group MUST be the same as the hostname of the container to ensure proper operation of VCS triggers. If using the build script, note that the syntax differs between VCS 5.0 and VCS 5.1 (see the VCS 5.1 note below).

 hp001# haconf -makerw
 hp001# hagrp -add isdzone1
 hp001# hagrp -modify isdzone1 SystemList \
 hp001 1  hp002 2  hp003 3  hp004 4
 hp001# hagrp -modify isdzone1 PreOnline 1
 hp001# hagrp -modify isdzone1 FailOverPolicy Load
 hp001# hagrp -modify isdzone1 AutoStart 0
 hp001# hagrp -modify isdzone1 AutoStartPolicy Load
 hp001# hagrp -modify isdzone1 AutoStartList \
 hp001  hp002  hp003  hp004
 hp001# hagrp -modify isdzone1 AutoFailOver 0
 hp001# hagrp -modify isdzone1 AutoRestart 0
 hp001# hagrp -display isdzone1

Create a DiskGroup resource:

 hp001# hares -add isdzone1_dg DiskGroup isdzone1
 hp001# hares -modify isdzone1_dg DiskGroup isdzone1_root
 hp001# hares -modify isdzone1_dg Enabled 1

Add Mount resource for root filesystem in diskgroup and make it dependent on the diskgroup being online:

 hp001# hares -add isdzone1_mntroot Mount isdzone1
 hp001# hares -modify isdzone1_mntroot BlockDevice \
/dev/vx/dsk/isdzone1_root/isdzone1_rootvol
 hp001# hares -modify isdzone1_mntroot MountPoint /tech/zones/isdzone1
 hp001# hares -modify isdzone1_mntroot FSType vxfs
 hp001# hares -modify isdzone1_mntroot Enabled 1
 hp001# hares -modify isdzone1_mntroot FsckOpt %-y
 hp001# hares -probe isdzone1_mntroot -sys  hp001
 hp001# hares -link isdzone1_mntroot isdzone1_dg

Create the resource for the zone and make it dependent on the root filesystem being mounted.

NOTE: On VCS 5.1, the container is associated with the service group via the ContainerInfo group attribute rather than the Zone resource's ZoneName attribute:
hagrp -modify <zonename> ContainerInfo Name <zonename> Type Zone Enabled 1

 hp001# hares -add isdzone1_zone Zone isdzone1
 hp001# hares -modify isdzone1_zone ZoneName isdzone1
 hp001# hares -modify isdzone1_zone Enabled 1
 hp001# hares -link isdzone1_zone isdzone1_mntroot
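Before bringing the group online, the dependency chain can be double-checked; a sketch:

 hp001# hares -dep isdzone1_zone
 hp001# hares -dep isdzone1_mntroot
[ confirm that isdzone1_zone depends on isdzone1_mntroot, and isdzone1_mntroot on isdzone1_dg ]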

If the container is a production container, set it to automatically start and fail over

Production containers will have the ability to restart in the event of an unscheduled node outage or when the cluster is first started.
Non-production containers are set to require manual intervention to start following failures or at cluster boot time.
 hp001# hagrp -modify isdzone1 AutoStart 1
 hp001# hagrp -modify isdzone1 AutoRestart 1
 hp001# hagrp -modify isdzone1 AutoFailOver 1

Commit the VCS changes to main.cf

 hp001# haconf -dump

Bring up the service group and verify that the storage is mounted and the zone has been booted

 hp001# hagrp -online isdzone1 -sys  hp001

 hp001# vxdg list
NAME STATE ID
isdzone1_root enabled,cds 1263496791.20.hp002
 hp001# vxprint -g isdzone1_root
TY NAME                    ASSOC            KSTATE     LENGTH   PLOFFS STATE TUTIL0 PUTIL0
dg isdzone1_root       isdzone1_root       -          -            -  -        -      -
dm c6t20d182           emcpower0s2         -          56554240     -  -        -       -
v isdzone1_rootvol     fsgen               ENABLED    36864000     -  ACTIVE   -       -
pl isdzone1_rootvol-01 isdzone1_rootvol    ENABLED    36864000     -  ACTIVE   -       -
sd c6t20d182-01        isdzone1_rootvol-01 ENABLED    36864000     0  -        -       -
v isdzone1_techvol     fsgen               ENABLED    18432000     -  ACTIVE   -       -
pl isdzone1_techvol-01 isdzone1_techvol    ENABLED    18432000     -  ACTIVE   -       -
sd c6t20d182-02        isdzone1_techvol-01 ENABLED    18432000     0  -        -       -

 hp001# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
2 isdzone1 running /tech/zones/isdzone1 native shared

 hp001# zlogin isdzone1
[Connected to zone 'isdzone1' pts/8]
Last login: Thu Jan 21 18:09:13 on pts/8
Sun Microsystems Inc. SunOS 5.10 Generic January 2005

isdzone1# df -h
Filesystem       size    used     avail   capacity  Mounted on
/               17G      1.1G      16G     7%       /
/dev            17G      1.1G      16G     7%       /dev
/lib            7.9G     4.5G      3.3G    58%      /lib
/platform       7.9G     4.5G      3.3G    58%      /platform
/sbin           7.9G     4.5G      3.3G    58%      /sbin
/tech           8.7G     8.8M      8.6G    1%       /tech
/usr            7.9G     4.5G      3.3G    58%      /usr
proc            0K       0K        0K      0%       /proc
ctfs            0K       0K        0K      0%       /system/contract
mnttab          0K       0K        0K      0%       /etc/mnttab
objfs           0K       0K        0K      0%       /system/object
swap            121G     352K      121G    1%       /etc/svc/volatile
.../libc_psr_hwcap2.so.1 7.9G 4.5G 3.3G    58%      /platform/sun4v/lib/libc_psr.so.1
.../libc_psr/libc_psr_hwcap2.so.1 7.9G 4.5G 3.3G 58% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd              0K        0K         0K    0%       /dev/fd
swap            121G     32K       121G    1%       /tmp
swap            121G     32K       121G    1%       /var/run

isdzone1# exit
 hp001#

Save the current zone configuration to a file in preparation for copy to other cluster nodes:

 hp001# cd /cluster/private
 hp001# zonecfg -z isdzone1 export > isdzone1.cfg

Edit the saved zonecfg file to resolve a problem with parameters in the wrong order

 hp001# vi isdzone1.cfg

(move the three lines)
add capped-memory
set physical=<value>
end

(to just before the first 'add rctl' line in the file)
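After the edit, that portion of the exported file should read roughly as follows (a sketch; the rctl entries are whatever zonecfg export generated for this container):

add capped-memory
set physical=<value>
end
add rctl
...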

Configuration of zone on other cluster nodes

Note the switch to the other cluster host for the following series of steps!

If not already done, create parent directory of zone root mountpoints
 hp002# mkdir /tech/zones
 hp002# chmod 700 /tech/zones

Create the zone root mountpoint. The final element of the zonepath should be the same as the chosen hostname of the zone, in this case isdzone1.
 hp002# cd /tech/zones
 hp002# mkdir isdzone1

Copy the zone configuration from the first node to a file on the second. Configure the zone on the second node and verify that the zone is in the "configured" state.
 hp002# zonecfg -z isdzone1 -f /cluster/private/isdzone1.cfg
 hp002# zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- isdzone1 configured /tech/zones/isdzone1

Verify that the zone can be attached to the second node
 hp002# hagrp -switch isdzone1 -to  hp002
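After the switch completes, a quick check (sketch) that the group is online on hp002 and the zone is running there:

 hp002# hagrp -state isdzone1
 hp002# zoneadm list -cv
[ isdzone1 should show a status of running on hp002 ]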

Repeat the steps in this section on all remaining cluster nodes which will host this container.

Zone Postbuild Steps

Execute the following commands to copy Hartford-specific configuration files from the global zone to the local zone:
 hp002# cd /etc
 hp002# cp passwd shadow issue issue.ssh motd profile nsswitch.conf \
auto_master auto_home auto_nas .login /tech/zones/isdzone1/root/etc
 hp002# cp profile.no.direct.login.IDs /tech/zones/isdzone1/root/etc
 hp002# chmod 644 /tech/zones/isdzone1/root/etc/profile.no.direct.login.IDs
 hp002# cp ssh/sshd_config /tech/zones/isdzone1/root/etc/ssh

 hp002# cd /etc/default
 hp002# cp login passwd inetinit nfs /tech/zones/isdzone1/root/etc/default

 hp002# cd /etc/ftpd
 hp002# cp ftpusers /tech/zones/isdzone1/root/etc/ftpd

 hp002# cd /etc/security
 hp002# cp policy.conf /tech/zones/isdzone1/root/etc/security

 hp002# cd /etc/skel
 hp002# cp local* /tech/zones/isdzone1/root/etc/skel

 hp002# cd /
 hp002# cp .profile /tech/zones/isdzone1/root

 hp002# cd /tech/support/bin
 hp002# cp show-server-config.sh /tech/zones/isdzone1/root/tech/support/bin

 hp002# cd /tech/support/etc
 hp002# cp isd-release /tech/zones/isdzone1/root/tech/support/etc

 hp002# cd /opt
 hp002# find local | cpio -pdm /tech/zones/isdzone1/root/opt

Install the VSA agent and reboot the container to register it.
 hp002# zlogin isdzone1
isdzone1# /net/isdistatus/tech/install/vsa/sun_vsa_install-1.ksh
isdzone1# exit
[Connection to zone isdzone1 pts/5 closed]
 hp002# hares -offline isdzone1_zone -sys `hostname`
 hp002# hares -online isdzone1_zone -sys `hostname`

Log in to the zone and run the VSA scan. Examine the results and remedy any violations found:
 hp002# zlogin isdzone1
isdzone1# /etc/vsa/bin/dragnet -s
isdzone1# cd /var/adm
isdzone1# grep ^VIOL <hostname>.e-admin*

Disable ufsdumps
  • ufsdumps in containers do not work - the container does not see its filesystems as UFS.
  • Edit root's crontab file and comment out the ufsdump line.
  • Add a comment explaining that this has been explicitly done for a container.
  • The crontab, when done, would look similar to this:
# Disabled ufsdumps and farmstat cannot run on container
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
# The following line flushes the sendmail queue hourly
0 * * * * /usr/lib/sendmail -q
#*********************************************************************
#40 13 * * 3  /tech/support/bin/ufsdump_standard_nfs.ksh
#*********************************************************************
# This is the i-Status script run to the virtual server isdistatus which is either isdsunsc01 or isdsunsc02
15 22 * * 0 /net/isdistatus.thehartford.com/tech/apache/htdocs/i-Status/bin/init_current.config > /dev/null 2>&1
# This is the getconfig script run to the virtual server isdistatus which is either isdsunsc01 or isdsunsc02
45 22 * * 0 /net/isdistatus.thehartford.com/tech/apache/htdocs/server-config/bin/get-system-config.sh > /dev/null 2>&1
#*********************************************************************
# This cleans up /tech/core older than 7 days
0 5 * * * find /tech/core/* -a -mtime +7 -ls -exec rm {} \; >> /tech/support/logs/remove_corefiles.log 2>&1
# This runs the disksuite-healthcheck script to check mirror status
15 6 * * 1,3,5 /tech/support/bin/disksuite-healthcheck.sh > /dev/null 2>&1
#       If srmSUN data generation has not terminated, stop before starting new day
0 0 * * * /var/adm/perfmgr/bin/terminate.srm ; /var/adm/perfmgr/bin/verify.srm
#       Verify srmSUN data is still being generated
25,55 * * * * /var/adm/perfmgr/bin/verify.srm
#       Remove srmSUN data files older than 7 days
0 1 * * * /var/adm/perfmgr/bin/clean.account
#       Create srmSUN Single File for Data Transfer
59 23 * * * /var/adm/perfmgr/bin/package.srm -z
1 2 * * * /etc/vsa/bin/dragnet >/dev/null 2>&1
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___
# This runs farmstat to collect history on currently running containers and also current capacity
#0,10,20,30,40,50 * * * * /cluster/private/fmon/fmon.sh
10 3 * * 0   /usr/lib/newsyslog


Modify zone configuration files as follows:
isdzone1# ln -s /usr/local/etc/sudoers /etc/sudoers
isdzone1# cd /etc/cron.d
isdzone1# vi cron.allow
root
sys

isdzone1# chown root:sys cron.allow

isdzone1# svcadm disable finger rlogin sma snmpdx wbem
isdzone1# svcadm disable cde-calendar-manager cde-login cde-spc
isdzone1# svcadm disable ftp telnet
isdzone1# svcadm disable rstat shell:default cde-ttdbserver cde-printinfo
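To confirm the services were disabled, a quick check such as the following sketch can be run:

isdzone1# svcs -a | egrep 'finger|rlogin|telnet|ftp|cde'
[ each of the services disabled above should show a state of disabled ]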

isdzone1# exit

Copy ISD NFS mounts to the vfstab of the container and create mountpoints
 hp002# grep nfs /etc/vfstab >> /tech/zones/isdzone1/root/etc/vfstab
 hp002# vi /tech/zones/isdzone1/root/etc/vfstab

(delete unnecessary shares)

 hp002# mkdir -p /tech/zones/isdzone1/root/ETSDentbkup/logs
 hp002# mkdir /tech/zones/isdzone1/root/ISDoracle
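For reference, an NFS entry in the container's vfstab follows the standard Solaris vfstab format; the server and export names in this sketch are hypothetical:

nasfiler01:/vol/ETSDentbkup  -  /ETSDentbkup  nfs  -  yes  rw,bg,hard,intr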

Test ssh login to the container using a Vintela ID. Test sudo, DNS, and NFS functionality.
 hp002# ssh sp19223@isdzone1
password:
isdzone1$ sudo sudosh
password:
isdzone1# ping isdiptdevx01
isdiptdevx01 is alive
isdzone1# mount /ETSDentbkup
isdzone1# mount /ETSDentbkup/logs
isdzone1# mount /ISDoracle


STOP HERE UNTIL YOU HAVE A CLEAN VSA HC SCAN!

Run the script to join a Vintela domain.
Please consult the ISD Wiki page Vintela Install for further instructions on the script.

 hp002# cd /export/home
 hp002# cp -r uidadmin /tech/zones/isdzone1/root/export/home
 hp002# chown uidadmin:uidadmin /tech/zones/isdzone1/root/export/home/uidadmin
isdzone1# /nas/isd/Vintela/VAS_3_5_2_12/vasInstall.ksh


Install BMC, Altiris, SRM, and Parity agents
NOTE: The BMC agent should not be installed in a container. You may be able to review the script below and run only the portions needed to install SRM and Parity.
isdzone1# mkdir /opt/rsc
isdzone1# /net/isdistatus/tech/install/scripts/add-build-agents.sh


M5000 XSCF CONSOLE ACCESS


SUN M5000 server

Sun's M5000 server has a new management interface card called the XSCF -- it is vastly different from the SC interface of other lights-out management products.

Here are my notes from setting up our system. Note that the IP and MAC addresses are used for example purposes only.

XSCF> rebootxscf [reboots the XSCF system]

XSCF> console -d 0
XSCF> showstatus
XSCF> showversion -c xcp -v [shows XCP firmware version and OpenBoot PROM version]
XSCF> showenvironment
XSCF> showenvironment temp
XSCF> showenvironment volt
XSCF> showhardconf
XSCF> showdcl -va [check domain id...]
XSCF> showdomainstatus -a
XSCF> showboards -a
XSCF> poweron -a [powers up all domains]
XSCF> poweroff -a [powers off all domains]
XSCF> poweron -d 0 [powers on domain 0]
XSCF> poweroff -d 0 [powers off domain 0]
XSCF> poweroff -f -d 0 [forces a power off domain 0]
XSCF> reset -d 0 por [resets domain 0]
XSCF> reset -d 0 xir [resets domain 0 with XIR reset]
XSCF> sendbreak -d 0 [sends break command to domain 0]
XSCF> setautologout -s 60 [sets autologout to 60 minutes]
XSCF> showautologout
XSCF> shownetwork -a
XSCF> setnetwork xscf#0-lan#0 -m 255.255.255.0 10.10.10.5
XSCF> sethostname xscf#0 fire-xscf
XSCF> sethostname -h host.org
XSCF> setroute -h host.org
XSCF> setnameserver 10.10.10.2 10.10.10.3
XSCF> setroute -c add -n 10.10.10.1 -m 255.255.255.0 xscf#0-lan#0
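On the M-series XSCF, network settings generally do not take effect until they are applied and the XSCF is rebooted. The usual follow-up is sketched below; this assumes the applynetwork command is available on the installed XCP firmware (check the XSCF reference manual for your release):

XSCF> applynetwork
XSCF> rebootxscf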

--------------------

I boot from a SAN, so here are the SAN FC disks:

{8} ok show-disks
a) /pci@2,600000/QLGC,qlc@0,1/fp@0,0/disk
b) /pci@2,600000/QLGC,qlc@0/fp@0,0/disk
q) NO SELECTION
Enter Selection, q to quit:

ok nvalias mydev /pci@2,600000/QLGC,qlc@0,1/fp@0,0/disk

{8} ok show-disks
a) /pci@2,600000/QLGC,qlc@0,1/fp@0,0/disk
b) /pci@2,600000/QLGC,qlc@0/fp@0,0/disk
q) NO SELECTION
Enter Selection, q to quit: b
/pci@2,600000/QLGC,qlc@0/fp@0,0/disk has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /pci@2,600000/QLGC,qlc@0/fp@0,0/disk
{8} ok nvalias mydev /pci@2,600000/QLGC,qlc@0/fp@0,0/disk
{8} ok boot mydev - install
Boot device: /pci@2,600000/QLGC,qlc@0/fp@0,0/disk File and args: - install
QLogic QLE2462 Host Adapter Driver(SPARC): 1.17 03/31/06


-------------------------

Need to make a system snapshot for diagnostic purposes? Use this command:

snapshot -l -v -p xxxxxxxxx -t me@myhost.host.org:/tmp

-----------------

ok> boot mydev - install

ok> watch-net-all

ok> show-nets

XSCF> showhardconf

-----------------------

{8} ok devalias net /pci@3,700000/network@0,1
{8} ok devalias
net /pci@3,700000/network@0,1
san /pci@2,600000/QLGC,qlc@0,1/fp@0,0/disk
name aliases
{8} ok boot net - install
Boot device: /pci@3,700000/network@0,1 File and args: - install
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65

{8} ok nvunalias net
{8} ok nvunalias net1
{8} ok set-defaults
Setting NVRAM parameters to default values.

{8} ok reset-all
Resetting...

{8} ok devalias net /pci@3,700000/network@0
{8} ok boot net - install
Boot device: /pci@3,700000/network@0 File and args: - install
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65

XSCF> sendbreak -d 1
Send break signal to DomainID 1?[y|n] :y
XSCF> reset -d 1 xir
DomainID to reset:01
Continue? [y|n] :y
01 :Reset

*Note*
This command only issues the instruction to reset.
The result of the instruction can be checked with the "showlogs power" command.

----------------------

{8} ok cd /pci@3,700000/network@0
{8} ok ./properties
./properties ?
{8} ok .properties
status okay
assigned-addresses 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
phy-type mif
board-model 501-7289
version Sun PCI-E 1G Ethernet UTP Adapter FCode 1.10 06/11/02
model SUNW,pcie-northstar
d-fru-len 00000000
d-fru-off 00000000
d-fru-dev eeprom
s-fru-len 00000000
s-fru-off 00000000
s-fru-dev eeprom
compatible pciex8086,105e.108e.125e.6
pciex8086,105e.108e.125e
pciex108e,125e
pciex8086,105e.6
pciex8086,105e
pciexclass,020000
pciexclass,0200
reg 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000
max-frame-size 00010000
address-bits 00000030
device_type network
name network
local-mac-address 0:15:36:3c:b7:65
fcode-rom-offset 0000e000
interrupts 00000001
cache-line-size 00000010
class-code 00020000
subsystem-id 0000125e
subsystem-vendor-id 0000108e
revision-id 00000006
device-id 0000105e
vendor-id 00008086

{8} ok nvalias net /pci@3,700000/network@0
{8} ok devalias
net /pci@3,700000/network@0
disk /pci@2,600000/QLGC,qlc@0,1/fp@0,0/disk
name aliases

{8} ok boot net - install
Boot device: /pci@3,700000/network@0 File and args: - install
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
1000 Mbps full duplex Link up
Requesting Internet Address for 0:15:36:3c:b7:65
Requesting Internet Address for 0:15:36:3c:b7:65
1000 Mbps full duplex Link up
1000 Mbps full duplex Link up

Requesting Internet address for 0:15:36:3c:b7:65
SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.



------------
Need to turn off the IP filter firewall?

# svcs | grep ip
online Nov_21 svc:/network/ipfilter:default
# svcadm disable ipfilter
# svcs | grep ip

---------------

added 2 additional memory boards:

XSCF> addboard -c assign -d 0 00-2
XSCF> addboard -c assign -d 1 00-3

XSCF> showboards -va

-----------------

To disable secure mode for the domain (secure mode blocks the break command):

XSCF> setdomainmode -d 0 -m secure=off
Diagnostic Level :min -> -
Secure Mode :on -> off
Autoboot :on -> -
The specified modes will be changed.
Continue? [y|n] :y
configured.
Diagnostic Level :min
Secure Mode :off (host watchdog: unavailable Break-signal:receive)
Autoboot :on (autoboot:on)

XSCF> sendbreak -y -d 0
Send break signal to DomainID 0?[y|n] :y

System now sits at the OK prompt:

Type 'go' to resume
{0} ok boot cdrom
Resetting...
POST Sequence 01
[.]
POST Sequence Complete.

Sun SPARC Enterprise M4000 Server, using Domain console
Copyright 2007 Sun Microsystems, Inc. All rights reserved.
Copyright 2007 Sun Microsystems, Inc. and Fujitsu Limited. All rights reserved.
OpenBoot 4.24.4, 32768 MB memory installed, Serial #3333333.
Ethernet address 0:15:36:3c:b7:65, Host ID: 99999999.

Rebooting with command: boot cdrom
Boot device: /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/disk@3,0:f File and args:
SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.