Solaris

From Smithnet Wiki
Latest revision as of 07:53, 6 June 2025

Installation

  • Oracle CBE (Common Build Environment): not for production
    • eg: 11.4-11.4.42.0.0.111.0
  • SRU (Support Repository Update) for production
    • eg: 11.4-11.4.42.0.1.113.1

See /etc/os-release for the installed version.

CBE does not install a desktop. To add one after a text install, first change the repository location:

pkg set-publisher -G '*' -g http://pkg.oracle.com/solaris/release/ solaris

Check the package is available online, then install:

pkg info -r solaris-desktop
pkg install solaris-desktop

Packages

Search for available gcc package, then install:

pkg search gcc | grep "C++ Compiler"
pkg install gcc-c++
pkg install python-39
pkg uninstall something

Swap

Show usage:

swap -l

Show swap volume:

zfs get volsize rpool/swap

Set swap volume size:

zfs set volsize=4G rpool/swap
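If the swap zvol is in use, the resize may need the swap device removed and re-added. A minimal sketch; the helper function name is illustrative, though the /dev/zvol/dsk path layout is standard:

```shell
# Sketch: map a dataset name like rpool/swap to its block device node.
swap_zvol_path() {
  printf '/dev/zvol/dsk/%s\n' "$1"
}

# On a live system (not executed here):
# swap -d "$(swap_zvol_path rpool/swap)"
# zfs set volsize=8G rpool/swap
# swap -a "$(swap_zvol_path rpool/swap)"
```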

MySQL

Install and start:

pkg install mysql
svcadm enable mysql

VirtualBox

pkg install runtime/python-39

Uninstall old version, install new:

pkgrm SUNWvbox
pkgadd -d VirtualBox-7.1.8-SunOS-amd64-r168469.pkg

Start:

General

Booting: x86

Into single-user mode:

  • In grub menu, edit entry
  • On $multiboot line, add "-s" to end
  • CTRL-X to boot

Show Grub boot options:

bootadm list-menu

Set default menu option to second one:

bootadm set-menu default=1

Set the timeout:

bootadm set-menu timeout=10

Booting: OpenBoot

  • To reach the ok> prompt: STOP-A (or BRK on a serial console)
banner
reset-all
probe-ide
probe-scsi
devaliases
printenv boot-device
setenv boot-device disk
reset

Package Management

Show package publisher:

pkg publisher

Show only the packages for which newer versions are available:

pkg info -u

Update:

pkg update

Show SRU installed (look at Branch and Packaging Date):

pkg info entire
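The Branch field can also be pulled out non-interactively; a minimal sketch, assuming the usual "Key: value" layout of "pkg info" output (the function name is illustrative):

```shell
# Sketch: extract the Branch (SRU) field from `pkg info entire` output,
# which prints indented "Key: value" lines.
sru_branch() {
  awk -F': *' '$1 ~ /Branch/ { print $2; exit }'
}

# Live usage (not executed here):
# pkg info entire | sru_branch
```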

Search for a package matching "ucb":

# pkg search ucb
INDEX      ACTION VALUE                                  PACKAGE
basename   file   usr/share/groff/1.22.3/font/devlj4/UCB pkg:/text/[email protected]
basename   dir    usr/ucb                                pkg:/legacy/compatibility/[email protected]
pkg.fmri   set    solaris/compatibility/ucb              pkg:/compatibility/[email protected]
pkg.fmri   set    solaris/legacy/compatibility/ucb       pkg:/legacy/compatibility/[email protected]

# pkg install pkg:/compatibility/[email protected]

Services

List all enabled services (-a also shows disabled):

svcs

Show long list about one service:

# svcs -l apache24
fmri         svc:/network/http:apache24
name         Apache 2.4 HTTP server
enabled      true
state        online
next_state   none
state_time   Mon Nov 12 16:22:58 2018
logfile      /var/svc/log/network-http:apache24.log
restarter    svc:/system/svc/restarter:default
contract_id  2017
manifest     /lib/svc/manifest/network/http-apache24.xml
dependency   optional_all/error svc:/system/filesystem/autofs:default (online)
dependency   require_all/none svc:/system/filesystem/local:default (online)
dependency   require_all/error svc:/milestone/network:default (online)

Enable a service:

svcadm enable apache24
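Enabling is asynchronous; a script can wait for the service to come online. A sketch, assuming `svcs -H -o state FMRI` prints a single state word (the helper itself is plain shell):

```shell
# Sketch: test whether a service state word read from stdin is "online".
svc_is_online() {
  read -r state
  [ "$state" = "online" ]
}

# Live usage (not executed here):
# svcadm enable apache24
# until svcs -H -o state apache24 | svc_is_online; do sleep 2; done
```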

Timezone

Show:

svccfg -s timezone:default listprop timezone/localtime

Set:

svccfg -s timezone:default setprop timezone/localtime = astring: "Europe/Amsterdam"

See also files in: /usr/share/lib/zoneinfo

User Management

To give a user the ability to su to root, add an entry in:

  • /etc/user_attr.d/local-entries

To show status and unlock:

passwd -s
passwd -u someuser

To stop account lockout:

usermod -K lock_after_retries=no someuser

iSCSI initiator (Static)

Check initiator service is up:

svcs network/iscsi/initiator

Add the target IQN and IP of the storage system (default port 3260). The target name and address form one comma-separated argument, with no space:

iscsiadm add static-config iqn.2000-01.com.example:initiator01,192.0.2.2:3260

Check targets:

iscsiadm list static-config

Enable CHAP:

iscsiadm modify initiator-node --authentication CHAP

Set user, and secret (password):

iscsiadm modify initiator-node --CHAP-name someuser
iscsiadm modify initiator-node --CHAP-secret
 Enter CHAP secret: ************
 Re-enter secret: ************

Enable:

iscsiadm modify discovery --static enable

Show initiator status:

iscsiadm list initiator-node
iscsiadm list target
iscsiadm list target-param -v

Show iSCSI disks:

iscsiadm list target -S | grep "OS Device Name"
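The static-config string is easy to get wrong; a sketch of a small builder (the function name is illustrative, the `<target-iqn>,<ip>:<port>` format is what `iscsiadm add static-config` expects):

```shell
# Sketch: build the single comma-separated static-config operand,
# defaulting to port 3260.
static_config() {
  printf '%s,%s:%s\n' "$1" "$2" "${3:-3260}"
}

# Live usage (not executed here):
# iscsiadm add static-config "$(static_config iqn.2000-01.com.example:initiator01 192.0.2.2)"
# iscsiadm modify discovery --static enable
```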


See also: Oracle Docs

Kerberos

Client: kclient

Networking

Check status:

dladm show-link
dladm show-ether

Show hostname:

svccfg -s system/identity:node listprop config

Set hostname:

svccfg -s system/identity:node setprop config/nodename="my-sol-host"
svccfg -s system/identity:node setprop config/loopback="localhost"

NTP

Client:

cd /etc/inet; cp ntp.client ntp.conf

(edit file)

svcadm enable ntp
svcadm restart ntp

Reset root password

  • Boot from CD
  • Select option 3: Shell

Check availability of rpool (none expected):

zpool list

Import rpool:

zpool import -f -R /a rpool

df -h should show some filesystems under /a

Show zfs filesystems, check for root/ROOT/...

zfs list

Set mount point for root filesystem:

zfs set mountpoint=/mnt_tmp rpool/ROOT/11.4-11.4.31.0.1.88.5

Check a new entry under /a/mnt_tmp has been added:

zfs list

Mount filesystem:

zfs mount rpool/ROOT/11.4-11.4.31.0.1.88.5

Remove password hash from /a/mnt_tmp/etc/shadow

Reset mount point:

zfs umount rpool/ROOT/11.4-11.4.31.0.1.88.5
zfs set mountpoint=/ rpool/ROOT/11.4-11.4.31.0.1.88.5
zpool export rpool
  • Reboot server
  • Edit the grub menu ("e")
  • On the line starting $multiboot, append the "-s" option for single-user mode
  • Enter "root" and, once in the shell, change the root password
  • Reboot
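The shadow edit in the steps above can be scripted; a sketch, assuming the standard colon-separated shadow(5) format (field 2 is the hash) and the /a/mnt_tmp mount used above:

```shell
# Sketch: blank the root password hash in a shadow(5) file so that
# single-user mode allows login and a new password can be set.
blank_root_hash() {
  awk -F: 'BEGIN { OFS=":" } $1 == "root" { $2 = "" } { print }'
}

# Live usage against the mounted root dataset (not executed here):
# blank_root_hash < /a/mnt_tmp/etc/shadow > /tmp/shadow.new
# cp /tmp/shadow.new /a/mnt_tmp/etc/shadow
```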

ZFS

Disks can be listed and formatted with:

format

Will show at least the root pool (rpool):

zpool list
zpool status rpool

Show zfs file systems:

zfs list

The default root pool (rpool) is a single disk immediately after install (eg c1t0d0). Add a second disk (c1t1d0) to make a mirror:

zpool attach rpool c1t0d0 c1t1d0

Then, monitor "zpool status rpool" output until "resilvering" is complete.
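The monitoring can be done in a loop; a sketch, assuming `zpool status` reports "resilver in progress" in its scan line while resilvering (the helper function name is illustrative):

```shell
# Sketch: test status text on stdin for an in-progress resilver.
resilver_in_progress() {
  grep -q 'resilver in progress'
}

# Live usage (not executed here):
# zpool attach rpool c1t0d0 c1t1d0
# while zpool status rpool | resilver_in_progress; do sleep 30; done
```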

Pools and Datasets

Pools can be made from disk devices or files.

Create a new pool from multiple devices in a RAID (higher RAID levels: raidz2, raidz3):

zpool create store raidz1 c1t2d0 c1t3d0 c1t4d0 c1t5d0
zpool list store
zfs list store

Check for errors:

zpool scrub store

Pools are, by default, mounted under root, eg /store. No entry in /etc/vfstab is required. Use the -m option on create to change this.

Remove a pool:

zpool destroy pool1

Remove a disk:

zpool detach pool1 /root/disk1

A dataset (like a subvolume) can be created in a pool, and can be nested:

zfs create store/database

Options can be passed on create, eg to change the mountpoint, etc:

zfs create -o mountpoint=/database -o compression=gzip -o quota=100G -o reservation=50G store/database

Quotas and Reservations

Quotas can be defined on datasets, such that no more than this amount of storage can be used. However, this limit may not be reachable if the pool's allocation is used up outside of this dataset. A reservation can be made from the pool so that only this dataset can use that space.

View and set quota:

zfs get quota store/database
zfs set quota=10G store/database

View and set reservation:

zfs get reservation store/database
zfs set reservation=10G store/database
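Quotas can be tried safely on a throwaway file-backed pool before applying them to real data; a sketch (pool and path names are examples; assumes ~200 MB free in /var/tmp):

```shell
# Sketch of a disposable test pool (not executed here):
# mkfile 200m /var/tmp/d1
# zpool create testpool /var/tmp/d1
# zfs create -o quota=50M testpool/capped
# dd if=/dev/zero of=/testpool/capped/fill bs=1M count=100   # fails once 50M is hit
# zpool destroy testpool

# Reading a property back non-interactively via `zfs get -H -o value`:
prop_is_set() {
  read -r value
  [ -n "$value" ] && [ "$value" != "none" ]
}
# zfs get -H -o value quota testpool/capped | prop_is_set && echo "quota set"
```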

Volume (Block Device)

A block device can be created from a pool, eg for use by a virtual machine, dump device or swap.

zfs create -V 5G store/vol

This will expose the device as /dev/zvol/dsk/store/vol.

Volumes can be used as iSCSI targets, see here.

Role Based Access Control

List profiles for a user:

profiles -l user1

Create a new profile (local files, not LDAP):

profiles -p ChangePasswords -S files
> set desc="Allow changing of passwords"
> set auth=solaris.passwd.assign,solaris.account.activate
> info
> verify
> exit

Update a user to be assigned the new profile:

usermod -P +ChangePasswords user1

Profiles are stored locally in:

  • /etc/security/prof_attr

Zones

Oracle Docs:

Check zfs:

zfs list | grep zones

Configuring a zone:

root@npgs-solaris:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set bootargs="-m verbose"
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root@npgs-solaris:~#

List config:

zonecfg -z z2 info
root@npgs-solaris:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   - zone1            configured  /system/zones/zone1          solaris    excl

Install zone:

root@npgs-solaris:~# zoneadm -z zone1 install
The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/zone1
Progress being logged to /var/log/zones/zoneadm.20181109T163221Z.zone1.install
       Image: Preparing at /system/zones/zone1/root.

Install Log: /system/volatile/install.25403/install_log
 AI Manifest: /tmp/manifest.xml.5c4vcb
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: zone1
Installation: Starting ...
          Creating IPS image
Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            415/415   65388/65388  428.2/428.2  507k/s

 PHASE                                          ITEMS
Installing new actions                   89400/89400
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
Installation: Succeeded
 done.

        Done: Installation completed in 1328.592 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /system/zones/zone1/root/var/log/zones/zoneadm.20181109T163221Z.zone1.install

Start the zone:

zoneadm -z zone1 boot

Login to the zone console (disconnect with ~.) and finish setup with UI:

zlogin -C zone1

Check status:

zoneadm list -v

Show config:

zonecfg -z zone1 info -a
zoneadm list -ip

Shutdown a zone:

zoneadm -z zone1 shutdown
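A zone's state can also be read in scripts; a sketch parsing the parseable output shown earlier, whose colon-separated fields are id:name:state:path:uuid:brand:ip (the function name is illustrative):

```shell
# Sketch: print the state field for a named zone from `zoneadm list -pc`
# output read on stdin.
zone_state() {
  awk -F: -v z="$1" '$2 == z { print $3 }'
}

# Live usage (not executed here):
# zoneadm list -pc | zone_state zone1    # eg "running" or "installed"
```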

Networking

By default, new zones are created with an exclusive IP network resource: a zone has access to a complete network stack, eg has its own IP address and routing.

A network resource called anet with the following properties was automatically created:

ip-type is exclusive
linkname is net0
lower-link is auto
mac-address is random
link-protection is mac-nospoof

Confirm with:

zonecfg -z z1 info -a

This link exists only while the zone is running. Check with:

ipadm
dladm show-link

Setting resource limits

Dedicated CPUs (set min 1, max 3; requires svc:/system/pools/dynamic to be enabled) for a zone:

# zonecfg -z zone1
zonecfg:zone1> add dedicated-cpu
zonecfg:zone1:dedicated-cpu> set ncpus=1-3
zonecfg:zone1:dedicated-cpu> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit

("select" to enter a resource once it exists. "remove" to delete)

CPU caps (an alternative to dedicated CPUs) can offer finer-grained control. Set a CPU cap (an upper limit on CPU consumption), eg 150% of one CPU:

# zonecfg -z zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=1.5
zonecfg:zone1:capped-cpu> end

Set Memory cap:

zonecfg:zone1> add capped-memory
zonecfg:zone1:capped-memory> set physical=512m
zonecfg:zone1:capped-memory> set swap=1024m
zonecfg:zone1:capped-memory> set locked=128m
zonecfg:zone1:capped-memory> end

Zones can be made immutable with the file-mac-profile property:

  • none
    • Normal read/write
  • strict
    • Read-only file system, no exceptions.
  • fixed-configuration
    • Permits updates to /var/* directories, with the exception of directories that contain system configuration components: IPS packages, including new packages, cannot be installed. Persistently enabled SMF services are fixed. SMF manifests cannot be added from the default locations. Logging and auditing configuration files can be local. syslog and audit configuration are fixed.
  • flexible-configuration
    • Permits modification of files in /etc/* directories, changes to root's home directory, and updates to /var/* directories. IPS packages, including new packages, cannot be installed. Persistently enabled SMF services are fixed. SMF manifests cannot be added from the default locations. Logging and auditing configuration files can be local. syslog and audit configuration can be changed.

The mutability setting can be observed:

# zoneadm list -p
0:global:running:/::solaris:shared:-:none::
1:z2:running:/system/zones/zone2:e4755797-169b-4f5b-b016-a28ccfbff24a:solaris:excl:-:none::
4:z1:running:/system/zones/zone1:f1784415-3fe3-4cca-8f23-ac7c9180664f:solaris:excl:R:fixed-configuration::

Here, zone1 is immutable.

Creating a template

Create a template based on zone "z1":

zlogin z1
sysconfig create-profile -o /root/z1-template

A configuration file will be created at /root/z1-template/sc_profile.xml. From the global zone, stop z1, then export its zone configuration:

zonecfg -z z1 export -f /root/z2-profile

Copy the system configuration template:

cp /system/zones/z1/root/root/z1-template/sc_profile.xml z2-template.xml

Create zone 2 based on the z1 template:

zonecfg -z z2 -f /root/z2-profile
zoneadm -z z2 clone -c /root/z2-template.xml z1