FM Broadcast Band and Linux - Disks and Filesystems

From Smithnet Wiki
== FM Broadcast Band ==

See also [http://www.transmissionzero.co.uk/radio/london-fm-radio/ here]

{| class="wikitable"
! Freq (MHz)
! Station
! Preset (Sony)
! Preset (Pioneer)
! Transmitter / Comments
|-
|88.4
|[https://visionradiouk.com/ Vision Radio]
|
|
|
|-
|88.8
|BBC Radio 2
|A2
|2
|Crystal Palace
|-
|89.1
|BBC Radio 2
|
|
|Wrotham
|-
|89.4
|Citylock Radio
|
|
|
|-
|90.0
|?
|
|
|
|-
|90.8
|Lightnin
|
|
|
|-
|91.0
|BBC Radio 3
|A3
|3
|Crystal Palace
|-
|91.3
|BBC Radio 3
|
|
|Wrotham
|-
|92.4
|Rainbow
|
|
|
|-
|93.2
|BBC Radio 4
|A4
|4
|Crystal Palace
|-
|93.5
|BBC Radio 4
|
|
|Wrotham
|-
|94.4
|[http://selectradioapp.com/ Select Radio]
|B8
|16
|Countisbury House, Dulwich
|-
|94.9
|BBC Radio London
|A8
|
|Crystal Palace
|-
|95.1
|[https://divineradiolondon.com/ Divine Radio]
|
|
|
|-
|95.5
|OnTop FM
|
|
|
|-
|95.8
|Capital London
|A9
|7
|Croydon
|-
|96.1
|[http://www.sdancelive.com/ S-Dance]
|B9
|15
|
|-
|96.9
|Capital Xtra
|A0
|
|Crystal Palace
|-
|97.3
|LBC
|B1
|8
|Croydon
|-
|98.5
|BBC Radio 1
|A1
|1
|Crystal Palace
|-
|98.8
|BBC Radio 1
|
|
|Wrotham
|-
|99.3
|[http://digitalsoulradio.com/ Digital Soul Radio]
|B0
|14
|Mono
|-
|99.5
|Venture FM
|
|
|
|-
|100.0
|Kiss
|B2
|9
|Croydon
|-
|100.6
|Classic FM
|A6
|5
|Crystal Palace
|-
|100.9
|Classic FM
|
|
|Wrotham
|-
|101.4
|[https://flexfm.co.uk/ Flex FM]
|
|
|
|-
|102.2
|Smooth
|B3
|10
|Croydon
|-
|104.9
|Radio X
|B4
|11
|Crystal Palace
|-
|105.4
|Magic
|B5
|
|Croydon
|-
|105.8
|Greatest Hits
|A7
|6
|Crystal Palace
|-
|106.2
|Heart London
|B6
|12
|Croydon
|-
|106.5
|[http://wk-end.co.uk/ WK-END]
|
|
|
|-
|107.3
|[https://www.reprezentradio.org.uk/ Reprezent]
|
|
|
|-
|107.5
|Time 107.5
|
|
|
|-
|107.8
|Radio Jackie
|
|
|
|}

== iSCSI ==

* Block storage provider: iSCSI Target
* Storage client: iSCSI Initiator
* Dynamic Discovery: the initiator sends a 'SendTargets' request to a single IP/port; if the target listens on multiple names and addresses, all of them are returned as TargetName and TargetAddress (IP:port) pairs.
* See [https://en.wikipedia.org/wiki/ISCSI here] for background and IQN naming.
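IQN names such as those used in the commands below follow the pattern iqn.YYYY-MM.reversed-domain:identifier. A quick format sanity check can be done with grep (the regex here is a simplified sketch, not the full RFC 3720 grammar):

```shell
# Simplified IQN pattern: iqn.<year>-<month>.<reversed-domain>:<unique-name>
iqn_pattern='^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:[A-Za-z0-9._-]+$'

check_iqn() {
  echo "$1" | grep -Eq "$iqn_pattern" && echo "valid" || echo "invalid"
}

check_iqn "iqn.2000-01.com.example:storage.target01"   # valid
check_iqn "not-an-iqn"                                 # invalid
```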
=== Target ===
 
* Install package (and dependencies): targetcli
* Choose/create local area for disk images: /iscsi_disks
 
Start admin utility:
targetcli
/> cd /backstores/fileio
/backstores/fileio> create disk01 /iscsi_disks/disk01.img 10G
/backstores/fileio> cd /iscsi
/iscsi> create iqn.2000-01.com.example:storage.target01
/iscsi> cd iqn.2000-01.com.example:storage.target01/tpg1/luns
/iscsi/iqn.20...t01/tpg1/luns> create /backstores/fileio/disk01
/iscsi/iqn.20...t01/tpg1/luns> cd ../acls
/iscsi/iqn.20...t01/tpg1/acls> create iqn.2000-01.com.example:initiator01
/iscsi/iqn.20...t01/tpg1/acls> cd iqn.2000-01.com.example:initiator01
/iscsi/iqn.20...initiator01> set auth userid=someuser
/iscsi/iqn.20...initiator01> set auth password=somepass
exit
 
Other commands within targetcli:
ls
delete [object]
help
 
Enable and start the target service:
systemctl enable target
systemctl start target
 
If necessary, open firewall for 3260:
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
 
See also [https://www.lisenet.com/2016/iscsi-target-and-initiator-configuration-on-rhel-7/ here]
 
=== Initiator ===
 
Install package: iscsi-initiator-utils
 
In /etc/iscsi/initiatorname.iscsi, set this host's initiator name (it must match the ACL created on the target):
InitiatorName=iqn.2000-01.com.example:initiator01
 
In /etc/iscsi/iscsid.conf:
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
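If the target also enforces CHAP during the discovery phase, iscsid.conf has parallel discovery settings (same credentials assumed here for illustration):

```text
# /etc/iscsi/iscsid.conf -- CHAP for the discovery phase as well
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = username
discovery.sendtargets.auth.password = password
```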
 
Discover target:
# iscsiadm -m discovery -t sendtargets -p san-server01
san-server01:3260,1 iqn.2000-01.com.example:storage.target01
 
More info:
iscsiadm -m node -o show
...
 
Login:
iscsiadm -m node --login
 
Confirm session:
iscsiadm -m session -o show
 
Confirm new device added (eg sdc):
cat /proc/partitions
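A quick way to spot the new device is to compare /proc/partitions from before and after the login; sketched here against captured copies (the device names are sample data, not real output):

```shell
# In practice:  cat /proc/partitions > before.txt  (before login)
#               cat /proc/partitions > after.txt   (after login)
# Sample captures used here for illustration (already sorted, as comm requires):
printf 'sda\nsda1\nsdb\n' > before.txt
printf 'sda\nsda1\nsdb\nsdc\n' > after.txt
comm -13 before.txt after.txt   # prints only the newly-added device: sdc
```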
 
Then, partition, format and mount /dev/sdc as normal.
 
Logout of iSCSI (after unmounting used filesystems):
iscsiadm -m node --logout
 
== Disk Management ==
 
=== Grub ===
 
When installing the OS on a RAID 1 mirror, the grub boot loader is only installed on the first disk, so if that disk fails you can't boot from the second. To copy the loader to the second disk:
 
grub> find /grub/stage1
 
This should find (hd0,0) and (hd1,0), which correspond to /dev/sda and /dev/sdb. Then temporarily make sdb the first disk and install:
 
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
 
=== HD Parameters ===
 
Show settings/features:
hdparm -I /dev/sda
 
Test transfer rate:
hdparm -t --direct /dev/sda
 
Show power management setting:
hdparm -B /dev/sda
 
=== MD RAIDs ===
 
Create an array of 2 disks in a RAID1 (mirror):
mdadm --create /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
 
Monitor status with:
mdadm --detail /dev/md0
cat /proc/mdstat
 
Ensure RAID is detected at boot time:
mdadm -Es >> /etc/mdadm.conf
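The appended line takes roughly this shape (the UUID below is a made-up placeholder, not a real array's):

```text
# /etc/mdadm.conf -- example ARRAY line as produced by mdadm -Es
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```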
 
Fail a drive in an array:
mdadm --fail /dev/md0 /dev/sdb1
 
Remove a (failed) device from an array:
mdadm --remove /dev/md0 /dev/sdb1
 
Add a device to an array:
mdadm --add /dev/md0 /dev/sdb1
 
The /etc/cron.weekly/99-raid-check script can sometimes report:
WARNING: mismatch_cnt is not 0 on /dev/md1
 
The actual mismatch count can be found:
cat /sys/block/md1/md/mismatch_cnt
 
To repair and then re-verify:
echo repair > /sys/block/md1/md/sync_action
echo check > /sys/block/md1/md/sync_action
 
=== Partitioning ===
 
==== FDisk ====
 
Supports MBR partition tables (recent versions of fdisk also handle GPT)
 
==== Parted ====
 
Supports MBR and GPT
 
See [https://www.gnu.org/software/parted/manual/html_chapter/parted_2.html#SEC8 Manual]
 
=== LVM ===
 
==== Physical Volumes ====
 
To create PVs from two partitions:
pvcreate /dev/sdc1 /dev/sdd1
 
To show current PVs:
pvscan
 
==== Volume Groups ====
 
To create a VG:
vgcreate vg00 /dev/sd[cd]1
 
To show all current VGs:
vgscan
 
To show details of a VG (including free PEs):
vgdisplay vg00
 
To extend a volume group by adding a new PV:
vgextend vg00 /dev/sde
 
To make a volume group available:
vgchange -ay vg00
 
==== Logical Volumes ====
 
To create a new LV:
lvcreate --size 100M vg00 -n lv00
or replace --size with the -l/--extents option, e.g. --extents 500, --extents 60%VG or --extents 100%FREE
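Extent counts translate to sizes via the VG's physical extent (PE) size, which is 4 MiB by default (check with vgdisplay). A quick sanity calculation:

```shell
# Assuming the default 4 MiB PE size, 500 extents correspond to:
extents=500
pe_size_mib=4
echo "$((extents * pe_size_mib)) MiB"   # 2000 MiB
```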
 
e.g. to create a RAID5 LV across 3 PVs (2 data + 1 parity):
lvcreate -n lv00 --type raid5 -i 2 --extents 100%FREE vg00
 
Show status of LVM RAID:
lvs -a vg00
 
To rename an LV in VG vg00:
lvrename vg00 lvold lvnew
 
To remove a LV:
lvremove vg00/lv01
 
To show current LVs:
lvscan
 
== Filesystems ==
 
To format with 1% minfree, large file support (see types in /etc/mke2fs.conf), journalling and a label:
mkfs.ext4 -m 1 -T largefile4 -j -L /home /dev/mapper/vg00-lv00
 
To alter the label:
e2label /dev/sda newlabel
 
To mount at boot time, add an entry to /etc/fstab.
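For the labelled ext4 filesystem above, a typical /etc/fstab entry might look like this (the mount point is an assumption):

```text
# /etc/fstab -- mount by the label set with mkfs.ext4 -L /home
LABEL=/home  /home  ext4  defaults  1 2
```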
 
Or to use XFS on a LV:
mkfs.xfs -L /home /dev/mapper/vg0-lv0
 
=== BTRFS ===
 
See also [https://btrfs.wiki.kernel.org/index.php/SysadminGuide here]
 
Create a RAID5 array for data and metadata (note: btrfs RAID5/6 still has known reliability issues, such as the write hole, so use with caution):
mkfs.btrfs -L data -d raid5 -m raid5 -f /dev/sdc /dev/sdd /dev/sde
 
View usage:
btrfs filesystem usage /data
 
Look for btrfs filesystems:
blkid --match-token TYPE=btrfs
 
Create subvolume:
  btrfs subvolume create /data/db
 
Info:
btrfs subvolume list /data
btrfs subvolume show /data/db
 
Delete subvolume:
  btrfs subvolume delete /data/db
 
Mount with a compression option in fstab:
compress=zstd:1
where the algorithm could also be lzo or zlib. The zstd compression level can be increased to 2 or 3.
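Put together, an fstab line for the btrfs filesystem above might read (device label and mount point assumed from the mkfs.btrfs example):

```text
# /etc/fstab -- btrfs with light zstd compression
LABEL=data  /data  btrfs  compress=zstd:1  0 0
```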
 
Defragment:
  btrfs filesystem defragment -r /
 
== Loopback Filesystem ==
 
dd if=/dev/zero of=loopback.img bs=1024M count=5
losetup -fP loopback.img
 
To show loopback device(s):
losetup -a
losetup -l
 
To delete loopback device:
losetup -d /dev/loop0
 
Then, create filesystem, eg:
mkfs.xfs -L backups loopback.img
 
Then mount /dev/loop0 as a normal block device (or mount the image file directly with -o loop). Note: the losetup configuration is lost at reboot, so it can't simply be added to /etc/fstab for at-boot mounting.
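The image-creation step can be exercised safely at a small scale (sizes shrunk here; the losetup and mkfs steps need root and are omitted):

```shell
# Create a small test image and confirm its size
dd if=/dev/zero of=loopback-test.img bs=1M count=16 2>/dev/null
stat -c %s loopback-test.img   # 16777216 bytes (16 MiB)
```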
 
== Smarttools ==
 
/etc/smartmontools/smartd.conf
 
Default to scan ATA/SCSI devices and report problems to root:
DEVICESCAN -H -m root -M exec /usr/libexec/smartmontools/smartdnotify -n standby,10,q
 
Or monitor a specific device, enable automatic offline testing (-o on) and attribute autosave (-S on), run a short self-test daily at 02:00 and a long self-test on Saturdays at 03:00, and email an external user:
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m user@domain.com
 
Scan for devices:
smartctl --scan
 
Show detailed information about a device:
smartctl --all /dev/sda

Revision as of 07:14, 20 March 2023
