Some items concerning Solaris 10.
Zone: chrooted “virtual” machine. Kernel is a shared resource.
Domain: Grouping of hardware in larger sun servers.
Partition: Separation of domains.
Container: Zone with resource controls in place.
/etc/zones contains data on all zones.
IPMP: automatic NIC failover. Both interfaces must be in the same subnet.
core: application failure
panic: kernel failure
pool stuff (page 2-34):
pools contain sets (dynamic processor sets)
dynamic resource pool: limit resources in a zone
poolcfg -dc info get info on the active (in-kernel) pool config
pooladm -e enable the pools facility
pooladm -s save active pool config to /etc/pooladm.conf
pooladm -x removes all user configured pools
projadd and projmod to limit memory (page 2-41)
rcapstat to monitor memory caps.
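A sketch of the pool/cap workflow above; the project name "testproj" and the 512 MB figure are invented for illustration:

```shell
# Enable resource pools and save the running config (see notes above)
pooladm -e                            # enable the pools facility
pooladm -s                            # write active config to /etc/pooladm.conf
poolcfg -dc info                      # inspect the in-kernel configuration

# Cap resident memory for a hypothetical project "testproj" at 512 MB
projadd -K 'rcap.max-rss=536870912' testproj
rcapstat 5                            # watch cap enforcement, 5-second interval
```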
preap – remove zombie processes.
see page 1-13 for script.
zonecfg -z <zone> to configure zones. see page 1-23.
zonecfg -z <zone> info show info on a zone.
zoneadm list -iv list zones and show status.
zoneadm -z <zone> install install a zone (copies files).
zoneadm -z <zone> boot boot the zone.
zoneadm -z <zone> [halt | reboot]
zlogin <zone> init 0 send a command to a zone w/o logging in (-S gives a minimal "safe" login if the zone is damaged)
zlogin -C <zone> attach to the zone's virtual console
~. disconnect from the virtual console (may also drop your underlying ssh session entirely)
~~. disconnect from the virtual console when connected over ssh
zoneadm -z <zone> uninstall uninstalls a zone (configuration is left intact)
zonecfg -z <zone> delete removes the zone configuration completely (make backups first)
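The commands above chain into a full lifecycle; in this sketch the zone name "testzone" and its zonepath are invented:

```shell
# Full zone lifecycle sketch -- "testzone" and /zones/testzone are assumptions
zonecfg -z testzone "create; set zonepath=/zones/testzone; commit"
zoneadm -z testzone install     # copy files into the new zone
zoneadm -z testzone boot
zlogin -C testzone              # virtual console; detach with ~.
zoneadm -z testzone halt
zoneadm -z testzone uninstall   # removes files, keeps the configuration
zonecfg -z testzone delete      # removes the configuration as well
```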
prstat -Z show CPU/mem utilization per zone (including the global zone)
Fault Management Architecture (FMA):
shuts down faulty components. healing by amputation.
CE – Correctable Error
UE – Uncorrectable Error (box will panic)
FMRI – Fault Managed Resource Identifier
cpu – central processing unit
mem – system main memory
mod – kernel module
pkg – packages
hc – hardware component managed by FMA
legacy-hc – legacy hardware component
fmd – diagnosis engine which is part of FMA
dev – solaris device path status and properties
svc – application service managed by the service management facility
zfs – ZFS filesystem
CxTxDxSx (disk device layout)
Tx = 6
Dx = 0
Sx = g
letters map to numbers starting at 0: a=0, b=1, etc.; g=6.
CxT6D0s6 is the result.
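The letter-to-number step can be checked with plain shell arithmetic (a=0 through h=7 for slices):

```shell
# slice letter -> slice number: a=0, b=1, ... g=6
letter=g
printf '%d\n' $(( $(printf '%d' "'$letter") - $(printf '%d' "'a") ))
# prints 6, so slice g is s6
```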
FMA case states:
Unsolved – no list.suspect published yet
Solved – list.suspect is published
Closed – false alarm, or service restarted, or HC turned off
fmadm config – shows loaded FMA modules and their versions
fmadm faulty – shows things that system thinks faulty
fmadm repair – after replacing part, this *MUST* be run
fmstat – statistics about FMA modules
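The fmadm commands above form a diagnose-and-repair cycle; the FMRI in this sketch is made up:

```shell
fmadm faulty                   # what does FMA think is broken?
fmstat                         # per-module FMA statistics
fmdump -v                      # review the fault log
# after physically replacing the part, tell FMA -- this *MUST* be run:
fmadm repair cpu:///cpuid=0    # FMRI here is an invented example
```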
snmp info on fmd pages 4-20 – 4-22
Service Management Facility:
svcs |more – gives list and status of services
svcadm enable – starts service, persistent through reboot
svcadm disable – stops service, will not restart after reboot
svcadm restart – will attempt to stop and start a service
use /var/svc/manifest/site for homegrown service manifests
online – service is online and running
offline – dependencies were not met, no start attempt made
maintenance – start attempted, but service failed to start
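A sketch of the svcadm verbs above against one service; nfs/server is just an example FMRI:

```shell
svcs -l network/nfs/server          # extended info, incl. current state
svcadm enable network/nfs/server    # starts now, persists across reboot
svcadm disable network/nfs/server   # stops now, stays off after reboot
svcadm restart network/nfs/server   # stop + start
svcs -x                             # explain why anything is in maintenance
```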
Brian’s Sun Class stuff:
svcprop shows service settings in the database
svcs -a shows all services
svcs -p “*nfs” would show all processes associated with nfs
svcs -d list services this service depends on
svcs -D lists services which depend on this service
svcs -l extended info on service
/var/svc/log contains logs for each service
inetadm lists services under inetd control
-e enable service
-d disable service
-p show service properties
svccfg import /path/to/xml imports manifest into the SMF repository
inetconv reads inetd.conf, imports them into the SMF repository
/lib/svc/bin/restore_repository restores a corrupt repository
svccfg validate /path/to/my.xml validates manifest
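Putting the manifest commands together; the file name myapp.xml and service name site/myapp are assumptions:

```shell
# validate first, then import the homegrown manifest (names are invented)
svccfg validate /var/svc/manifest/site/myapp.xml
svccfg import /var/svc/manifest/site/myapp.xml
svcadm enable site/myapp        # FMRI comes from the manifest itself
```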
more research on dtrace is needed
rbac (role based access control) research as well
download scat (solaris crash analysis tool)
solaris fingerprint database
fcinfo hba-port reports WWNs of Fibre Channel HBAs
scp stalls on x86 Solaris:
The fix is a setting added to /kernel/drv/e1000g.conf, followed by a reboot. (The setting itself was not captured in these notes.)
Following disk io:
iostat -zxtcT d 1
ZFS mapping ssd# to device name:
cat /etc/path_to_inst and find your ssd #
copy the scsi entry (EX: “/scsi_vhci/ssd@g60060e8004a4c3000000a4c30000000e”)
ls -l /dev/dsk | fgrep “/scsi_vhci/ssd@g60060e8004a4c3000000a4c30000000e”
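The two lookup steps can be strung together; the instance number 5 and the path_to_inst field layout (quoted path, instance, quoted driver) are assumptions here:

```shell
# Map ssd instance 5 to its /dev/dsk c#t#d# name (sketch, untested)
inst=5
path=$(awk -v n="$inst" '$2 == n && $3 == "\"ssd\"" { gsub(/"/, "", $1); print $1 }' /etc/path_to_inst)
ls -l /dev/dsk | fgrep "$path"
```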