Tag Archives: zone

Bind9: Master Only

Configuration for a master only DNS server.

1. WILL NOT answer queries
2. WILL NOT forward queries
3. WILL NOT perform recursion
4. WILL allow transfers from specified slaves

Zone and configuration files are backed up disk to disk via rsync.
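
A minimal sketch of that backup, assuming a config file at /etc/named.conf and an rsync target (backuphost:/backups/dns) that are purely illustrative:

rsync -av /etc/named.conf /etc/my.hosts.net backuphost:/backups/dns/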

Advantage: single-point editing of our name space.

Disadvantage: single point of failure. If the server is lost, updates to DNS cannot be made until another master is brought online.

options {
        directory "/etc";
        pid-file "/var/run/named.pid";
        version "Windows 3.11";
        allow-query { "none"; };
        allow-recursion { "none"; };
        notify yes;
        also-notify {
                IPn.IPn.IPn.IPn;
        };
        allow-transfer {
                IPn.IPn.IPn.IPn;
        };
};

zone "my.hosts.net" {
        type master;
        file "/etc/my.hosts.net";
};
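
The zone file itself is not shown above; a minimal sketch of what /etc/my.hosts.net might look like (hostnames, serial, and the IPn placeholders are illustrative, not the real records):

$TTL 86400
@       IN SOA  ns1.my.hosts.net. hostmaster.my.hosts.net. (
                1          ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; negative cache TTL
        IN NS   ns1.my.hosts.net.
        IN NS   ns2.my.hosts.net.
ns1     IN A    IPn.IPn.IPn.IPn   ; placeholder addresses, as elsewhere in this post
ns2     IN A    IPn.IPn.IPn.IPn

On each slave, the matching stanza would be roughly "type slave;" plus a "masters { IPn.IPn.IPn.IPn; };" entry pointing back at this server.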

Simple Zone Construction

bash-3.00# zoneadm list -iv
ID NAME STATUS PATH
0 global running /
11 foo running /export/zones/foo
13 bar running /export/zones/bar
bash-3.00# zonecfg -z fubar
fubar: No such zone configured
Use 'create' to begin configuring a new zone.

zonecfg:fubar> create
zonecfg:fubar> set zonepath=/export/zones/fubar
zonecfg:fubar> set autoboot=true
zonecfg:fubar> add net
zonecfg:fubar:net> set physical=eth0
zonecfg:fubar:net> set address=192.168.1.1
zonecfg:fubar:net> end
zonecfg:fubar> add attr
zonecfg:fubar:attr> set name=comment
zonecfg:fubar:attr> set type=string
zonecfg:fubar:attr> set value="FOOBED"
zonecfg:fubar:attr> end
zonecfg:fubar> verify
zonecfg:fubar> commit
zonecfg:fubar> exit
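
Before installing, the committed configuration can be reviewed from the Global Zone:

bash-3.00# zonecfg -z fubar info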

bash-3.00# zoneadm -z fubar install

Preparing to install zone <fubar>.
Creating list of files to copy from the global zone.
Copying <2434> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <980> packages on the zone.
Initialized <980> packages on zone.
Zone <fubar> is initialized.
Installation of these packages generated warnings:
The file contains a log of the zone installation.

bash-3.00# zoneadm -z fubar boot

bash-3.00# zlogin -C fubar

Solaris 10: Zones

This is the first part in a series of notes taken on new(ish) Solaris 10 technologies. Other items I have notes on are ZFS and the new service administration framework (SMF).

Zones, Containers, Domains, and Partitions (according to Sun):

Zone: chroot’d virtual machine. Some resources are shared, for example, the kernel or /usr/lib.
More info below.

Container: Zone with resource controls in place. At this time, limited to number of CPUs.
See “Resource Pools”

Domain: Grouping of hardware in enterprise class Sun servers

Partition: Segregation of domain grouped hardware.

Non-Global Zones (NGZs) can use either the Sparse Root Model (/lib, /platform, /sbin, and /usr are inherited read-only from the Global Zone) or the Whole Root Model (those directories are full copies inside the zone).
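
A quick zonecfg sketch of building a whole root zone; the zone name wholezone and its path are made up. The default create uses the sparse root template, while create -b starts from a blank configuration with no inherit-pkg-dir entries, which gives a whole root zone:

bash-3.00# zonecfg -z wholezone
zonecfg:wholezone> create -b
zonecfg:wholezone> set zonepath=/export/zones/wholezone
zonecfg:wholezone> verify
zonecfg:wholezone> commit
zonecfg:wholezone> exit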

Monitoring Zones:
prstat -Z show cpu/mem utilization on zones (including the Global Zone)
rcapstat monitor memory caps
poolcfg -dc info get info on pools
zoneadm list -iv list zones and show status
zonecfg -z <zonename> info show info on a zone

Resource Allocation (Resource Capping Daemon):
pooladm -e enable the resource pools facility
pooladm -s save the active pool config to /etc/pooladm.conf
pooladm -x remove all user-configured pools
projadd and projmod to set memory caps, enforced by rcapd (sketch below)
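
A hedged sketch of memory capping; the project name zoneapps and the 512MB cap are made-up values:

projadd -c "capped apps" -K "rcap.max-rss=512MB" zoneapps   (create a project with a 512MB RSS cap)
rcapadm -E                                                  (enable the resource capping daemon)
rcapstat 5                                                  (watch the caps every 5 seconds)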

Zone creation and destruction:
zonecfg -z <zonename> to configure a zone
zoneadm -z <zonename> uninstall uninstalls a zone (configuration is left intact)
zonecfg -z <zonename> delete removes the zone configuration completely (make backups)
zoneadm -z <zonename> install installs a zone (copies files)

Zone Interaction (From the Global Zone):
zlogin -C <zonename> virtual serial console
zlogin -S <zonename> <command> send a command to the zone w/o logging in
zoneadm -z <zonename> boot boot the zone
zoneadm -z <zonename> [halt | reboot] (example below)
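
For example, using the fubar zone built earlier, all from the Global Zone:

zoneadm -z fubar boot
zlogin -C fubar          (use ~. to disconnect)
zoneadm -z fubar reboot
zlogin -S fubar init 0   (shuts the zone down without an interactive login)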

Miscellaneous Zone Stuff:
/etc/zones contains data on all configured zones
Dynamic resource pools allow limiting of resources a zone can use
~. disconnect from virtual console (may blow you completely out)
~~. to disconnect from virtual console (use this if the above doesn’t work correctly)
NGZs cannot currently act as an NFS server.

Some of the resource management comments may seem to contradict each other. I will clarify these statements as I implement resource controls.

Solaris 10 Crib

Some items concerning Solaris 10.

Zone: chrooted “virtual” machine. Kernel is a shared resource.
Domain: Grouping of hardware in larger sun servers.
Partition: Separation of domains.
Container: Zone with resource controls in place.

/etc/zones contains data on all zones.

IPMP: automatic NIC failover. Both NICs must be in the same subnet.

core: application failure
panic: kernel failure

pool stuff (page 2-34):
pools contain sets (dynamic processor sets)
dynamic resource pool: limit resources in a zone
poolcfg -dc info get info on pools
pooladm -e enable the resource pools facility
pooladm -s save the active pool config to /etc/pooladm.conf
pooladm -x removes all user-configured pools (rough pool-to-zone sketch below)
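
A rough sketch of building a processor set and pool and handing it to a zone; the names fubar-pset/fubar-pool and the CPU counts are made up:

pooladm -e
poolcfg -dc 'create pset fubar-pset (uint pset.min = 1; uint pset.max = 2)'
poolcfg -dc 'create pool fubar-pool'
poolcfg -dc 'associate pool fubar-pool (pset fubar-pset)'
pooladm -s
zonecfg -z fubar
zonecfg:fubar> set pool=fubar-pool
zonecfg:fubar> commit
zonecfg:fubar> exit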

memory capping:
projadd and projmod to limit memory (page 2-41)
rcapstat to monitor memory caps.

preap – remove zombie processes.

zone stuff:
see page 1-13 for script.
zonecfg -z <zonename> to configure zones. see page 1-23.
zonecfg -z <zonename> info show info on a zone.
zoneadm list -iv list zones and show status.
zoneadm -z <zonename> install install zone (copy files).
zoneadm -z <zonename> boot boot the zone.
zoneadm -z <zonename> [halt | reboot]
zlogin -S <zonename> init 0 send command to zone w/o login

zlogin -C <zonename> virtual serial console
~. disconnect from virtual console (may blow you completely out)
~~. to disconnect from virtual console

zoneadm -z <zonename> uninstall uninstalls a zone (configuration is left intact)
zonecfg -z <zonename> delete removes zone configuration completely (make backups)

prstat -Z show cpu/mem utilization on zones (including Global)

Predictive Self-Healing:
shuts down components. healing by amputation.

FMA:
Fault Management Architecture

CE – Correctable Error
UE – Uncorrectable Error (box will panic)
FMRI – Fault Managed Resource Identifier
cpu – central processing unit
mem – system main memory
mod – kernel module
pkg – packages
hc – hardware component managed by FMA
legacy-hc – legacy hardware component
fmd – diagnosis engine which is part of FMA
dev – solaris device path status and properties
svc – application service managed by the service management facility
zfs – ZFS filesystem

cXtXdXsX (disk device naming)

FMRI sd@6,0:g

tX = 6
dX = 0
sX = g

Slice letters start at 0: a = slice 0, b = 1, c = 2, and so on, making g = 6.

cXt6d0s6 is the result (cX being whatever controller the disk sits on).

FMD states:

Unsolved – has no list.suspect
Solved – list.suspect is published
Closed – false alarm or service restart or HC turned off

fmadm config – shows the fault manager configuration (loaded modules)
fmadm faulty – shows resources the system thinks are faulty
fmadm repair <FMRI> – after replacing the part, this *MUST* be run (workflow sketch below)
fmstat – statistics about FMA modules
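
A typical post-fault workflow; the UUID and FMRI are placeholders for whatever fmadm faulty reports:

fmadm faulty               (what does the system think is broken?)
fmdump -v -u <UUID>        (detail on that fault event)
(physically replace the failed hardware)
fmadm repair <FMRI>        (tell FMA the part has been replaced)
fmstat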

snmp info on fmd pages 4-20 – 4-22

Service Management Facility:
svcs | more – gives list and status of services
svcadm enable <FMRI> – starts service, persistent through reboot
svcadm disable <FMRI> – stops service, will not restart after reboot
svcadm restart <FMRI> – will attempt to stop and start a service (example below)
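
For example, against the stock ssh service:

svcs ssh
svcadm disable svc:/network/ssh:default
svcadm enable svc:/network/ssh:default
svcadm restart svc:/network/ssh:default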

use /var/svc/manifest/site for homegrown service manifests

online – service is online and running
offline – dependencies were not met, no start attempt made
maintenance – start attempted, but service failed to start

Brian’s Sun Class stuff:
http://www.my-speakeasy.com/sunstuff/SA225/

svcprop <FMRI> shows service settings in the database
svcs -a shows all services
svcs -p "*nfs" would show all processes associated with nfs
svcs -d <FMRI> list services this service depends on
svcs -D <FMRI> lists services which depend on this service
svcs -l <FMRI> extended info on a service

/var/svc/log contains logs for each service

inetadm lists services under inetd control
-e enable service
-d disable service
-p show default inetd property values (-l <FMRI> shows a specific service's properties)
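
For example, with the telnet instance that inetd manages by default:

inetadm | grep telnet
inetadm -l svc:/network/telnet:default
inetadm -d svc:/network/telnet:default
inetadm -e svc:/network/telnet:default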

svccfg import /path/to/xml imports manifest into the SMF repository

inetconv reads inetd.conf and imports its entries into the SMF repository

/lib/svc/bin/restore_repository
restores a corrupt repository

svccfg validate /path/to/my.xml validates manifest
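
Putting the manifest pieces together for a homegrown service; myapp.xml and the resulting svc:/site/myapp:default FMRI are made up and depend on what the manifest actually declares:

cp myapp.xml /var/svc/manifest/site/
svccfg validate /var/svc/manifest/site/myapp.xml
svccfg import /var/svc/manifest/site/myapp.xml
svcadm enable svc:/site/myapp:default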

more research on dtrace is needed
rbac (role based access control) research as well

download scat (solaris crash analysis tool)

solaris fingerprint database

fcinfo hba-port reports WWNs of fibre channel HBAs

scp stalls on x86 Solaris:

Add the following line to /kernel/drv/e1000g.conf and reboot:

lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;

Following disk io:
iostat -zxtcT d 1

ZFS mapping ssd# to device name:
cat /etc/path_to_inst and find your ssd #
copy the scsi entry (EX: "/scsi_vhci/ssd@g60060e8004a4c3000000a4c30000000e")
ls -l /dev/dsk | fgrep "/scsi_vhci/ssd@g60060e8004a4c3000000a4c30000000e"