I use a Synology DS920+ to provision disks for Oracle ASM. Let’s get started:
Log in to the DS920+ and open the “SAN Manager” app:

This is how it looks in my test environment:

Creating the iSCSI targets:

We will create 6 target disks that will be managed by ASM:
– TGT-ASM-MGMT1 => 50GB
– TGT-ASM-DATA1 => 50GB
– TGT-CRS1 => 30GB
– TGT-CRS2 => 30GB
– TGT-CRS3 => 30GB
– TGT-RECO1 => 100GB
Click on “Create”

Click Next

Choose to create a new LUN, then click Next

Give it a proper name and size, then click Next

Click Done.
Now we can repeat the same steps to create the other disks.
In the end we get to this overview:

Now let’s attach the disks to the Linux host:
[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-ASM-MGMT1.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-ASM-MGMT1.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-ASM-MGMT1.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-ASM-DATA1.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-ASM-DATA1.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-ASM-DATA1.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-CRS1.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS1.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS1.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-CRS2.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS2.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS2.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-CRS3.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS3.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-CRS3.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# iscsiadm --mode node --targetname iqn.2000-01.com.synology:TGT-RECO1.fb3575abee2 --portal nas1 --login
Logging in to [iface: default, target: iqn.2000-01.com.synology:TGT-RECO1.fb3575abee2, portal: 192.168.1.2,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:TGT-RECO1.fb3575abee2, portal: 192.168.1.2,3260] successful.

[root@host ~]# fdisk -l
Disk /dev/sda: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0ec5508a

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sda1  *       2048   2099199   2097152    1G 83 Linux
/dev/sda2       2099200 626968575 624869376  298G 8e Linux LVM

Disk /dev/mapper/ol-root: 70 GiB, 75161927680 bytes, 146800640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-swap: 7.9 GiB, 8485076992 bytes, 16572416 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-u02: 93.1 GiB, 100000595968 bytes, 195313664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-home: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-u01: 93.1 GiB, 100000595968 bytes, 195313664 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol-tmp: 13.8 GiB, 14801698816 bytes, 28909568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdf: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdg: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@host scripts]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0  300G  0 disk
|-sda1          8:1    0    1G  0 part /boot
`-sda2          8:2    0  298G  0 part
  |-ol-root   252:0    0   70G  0 lvm  /
  |-ol-swap   252:1    0  7.9G  0 lvm  [SWAP]
  |-ol-u02    252:2    0 93.1G  0 lvm  /u02
  |-ol-home   252:3    0   20G  0 lvm  /home
  |-ol-u01    252:4    0 93.1G  0 lvm  /u01
  `-ol-tmp    252:5    0 13.8G  0 lvm  /tmp
sdb             8:16   0   50G  0 disk
sdc             8:32   0   50G  0 disk
sdd             8:48   0   30G  0 disk
sde             8:64   0   30G  0 disk
sdf             8:80   0   30G  0 disk
sdg             8:96   0  100G  0 disk
sr0            11:0    1 1024M  0 rom
sr1            11:1    1 1024M  0 rom
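Logging in to six targets one by one gets tedious, so the sequence above can be scripted. A minimal sketch in POSIX sh — it only prints the commands (a dry run); save it and pipe its output to sh to actually execute them. The portal name nas1 and the IQN suffix are the ones from my session and will differ on your NAS:

```shell
# Build the discovery + login commands for all six ASM targets.
# Dry run: the commands are printed, not executed.
PORTAL=nas1          # resolves to 192.168.1.2 in this setup
SUFFIX=fb3575abee2   # suffix Synology appended to every IQN; yours differs
OUT="iscsiadm --mode discovery --type sendtargets --portal ${PORTAL}"
for TGT in TGT-ASM-MGMT1 TGT-ASM-DATA1 TGT-CRS1 TGT-CRS2 TGT-CRS3 TGT-RECO1; do
  OUT="${OUT}
iscsiadm --mode node --targetname iqn.2000-01.com.synology:${TGT}.${SUFFIX} --portal ${PORTAL} --login"
done
echo "$OUT"          # pipe this output to sh to run the commands for real
```

To have the sessions restored automatically after a reboot, each node record can additionally be updated with iscsiadm --mode node --targetname <iqn> --op update --name node.startup --value automatic.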
Make them shareable
If you want to use the disks from more than one node, you have to make these iSCSI targets shareable like this:
Click on the iSCSI menu option to list all the disks:

Click on the disk you want to share and then click Edit:

Click on Advanced:

Click on “Allow multiple sessions from one or more iSCSI initiators”:

Click Save.
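With the LUNs shared, every node can log in to the same disks. One more step before ASM can use them: the block devices must be owned by the Grid Infrastructure user, which is usually handled with udev rules. A hypothetical sketch — the rules file name, user grid, group asmadmin, the symlink names, and the <serial-…> placeholders are my assumptions; substitute the real serial of each disk, obtained with /usr/lib/udev/scsi_id -g -u /dev/sdb and so on:

```text
# /etc/udev/rules.d/99-oracle-asm.rules  (sketch; one rule per ASM disk)
# Replace each ID_SERIAL value with the scsi_id output for that device.
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="<serial-of-sdb>", SYMLINK+="oracleasm/asm-mgmt1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="<serial-of-sdc>", SYMLINK+="oracleasm/asm-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

Repeat for the remaining disks, then reload with udevadm control --reload-rules && udevadm trigger. Matching on the serial keeps ownership and the /dev/oracleasm names stable across reboots even if the /dev/sdX letters change.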