Storage Limits and System Configuration

Storage limits

snapshot max          255 per volume
qtree max             4,995 per volume
vol max (64-bit)      16TB per volume
flexvol max           FAS2040: 200; all other models: 500 per filer
flexvol min           20MB
flexvol max (32-bit)  16TB
flexvol max (64-bit)  100TB (model-dependent)
trad vol max          16TB per volume
aggr max (32-bit)     16TB
aggr max (64-bit)     model-dependent
aggr min size         RAID-DP: 3 disks / RAID4: 2 disks
raid group max        150 per aggr
raid group max        400 per filer
max lun size          16TB per lun


SnapMirror setup steps:

1) Enable SnapMirror on the source and destination filers

source-filer> options snapmirror.enable
snapmirror.enable            on
source-filer> options snapmirror.access
snapmirror.access            legacy

2) Grant SnapMirror access

Make sure the destination filer has SnapMirror access to the source filer.

source-filer> rdfile /etc/snapmirror.allow 
destination-filer 
destination-filer2
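
Alternatively, when snapmirror.access is not set to legacy, access can be granted through the option itself instead of /etc/snapmirror.allow (a sketch reusing the filer names above):

source-filer> options snapmirror.access host=destination-filer,destination-filer2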

3) Initialize the SnapMirror relationship

destination-filer> vol create demo_destination aggr01 100g 
destination-filer> vol restrict demo_destination

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree

4) Monitor the status

destination-filer> snapmirror status
snapmirror is on.
source                          destination                          state          lag   status
source-filer:demo_source        destination-filer:demo_destination   uninitialized  -     transferring (1690 MB done)
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   uninitialized  -     transferring (32 MB done)

5) Set the SnapMirror schedule

You set the schedule in /etc/snapmirror.conf using cron-style fields (minute hour day-of-month day-of-week); for synchronous SnapMirror you add "sync" in place of the cron-style fields.

destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source        destination-filer:demo_destination - 0 * * *  # replicates every hour, on the hour
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree - 0 21 * * # replicates every day at 9:00 PM
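
A minimal sketch of a synchronous entry, reusing the volumes above (support for synchronous mode depends on the Data ONTAP version):

source-filer:demo_source        destination-filer:demo_destination - sync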


User-specified Snapshot copy schedules

You can configure weekly, nightly, or hourly Snapshot copy schedules using the snap sched command.

Type     Description
Weekly   Snapshot copies every Sunday at midnight.
Nightly  Snapshot copies every night at midnight, except when a weekly Snapshot copy is scheduled to occur at the same time.
Hourly   Snapshot copies on the hour or at specified hours, except if a weekly or nightly Snapshot copy is scheduled to occur at the same time.
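
For example (retention counts and volume name are illustrative), the following keeps 2 weekly, 6 nightly, and 8 hourly copies, taking the hourly copies at 8:00, 12:00, 16:00, and 20:00:

filer1> snap sched vol1 2 6 8@8,12,16,20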


Snapvault commands

Add license
Source filer – filer1


filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=svfiler


Destination filer – svfiler


svfiler> license add XXXXX
svfiler> options snapvault.enable on
svfiler> options snapvault.access host=filer1


Disabling the destination filer's snapshot schedule


svfiler> snap sched demo_vault 0 0 0


Creating the initial backup:


On the destination filer, execute the command below to initiate the baseline transfer. The time taken to complete depends on the size of the data in the source qtree and the network bandwidth. Check "snapvault status" on the source/destination filers to monitor the baseline transfer's progress.


svfiler> snapvault start -S filer1:/vol/datasource/qtree1  svfiler:/vol/demo_vault/qtree1


Creating backup schedules:


On source filer:


filer1> snapvault snap sched datasource sv_hourly 2@0-22  
filer1> snapvault snap sched datasource sv_daily  2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun


On the SnapVault (destination) filer:
If you don't use the -x option, the secondary does not contact the primary and transfer the Snapshot copy; it just creates a Snapshot copy of the destination volume.


svfiler> snapvault snap sched -x demo_vault sv_hourly 6@0-22
svfiler> snapvault snap sched -x demo_vault sv_daily  14@23@sun-fri
svfiler> snapvault snap sched -x demo_vault sv_weekly 6@23@sun


To check the SnapVault status, use the command "snapvault status" on either the source or destination filer. To see the backups, run "snap list" against the destination volume; that will show all the backup copies, their creation times, and so on.
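
For instance, using the destination volume created above:

svfiler> snapvault status
svfiler> snap list demo_vault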




Interface Groups

Create (single-mode)

# To create a single-mode interface group, enter the following command:
ifgrp create single SingleTrunk1 e0 e1 e2 e3

# To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the single-mode interface group SingleTrunk1
ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0

# To specify the interface e1 as preferred
ifgrp favor e1

Create ( multi-mode)

# To create a static multimode interface group, comprising interfaces e0, e1, e2, and e3 and using MAC 
# address load balancing
ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3

# To create a dynamic multimode interface group, comprising interfaces e0, e1, e2, and e3 and using IP 
# address based load balancing
ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3

Create a second-level interface group

# To create two interface groups and a second-level interface group. In this example, IP address load 
# balancing is used for the multimode interface groups.
ifgrp create multi Firstlev1 e0 e1
ifgrp create multi Firstlev2 e2 e3
ifgrp create single Secondlev Firstlev1 Firstlev2

# To enable failover to a multimode interface group with higher aggregate bandwidth when one or more of 
# the links in the active multimode interface group fail
options ifgrp.failover.link_degraded on

Note: You can create a second-level interface group by using two multimode interface groups. Second-level interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails.

Create a second-level interface group in an HA pair

# Use the following commands to create a second-level interface group in an HA pair. In this example,
# IP-based load balancing is used for the multimode interface groups.

# On StorageSystem1:
ifgrp create multi Firstlev1 e1 e2
ifgrp create multi Firstlev2 e3 e4
ifgrp create single Secondlev1 Firstlev1 Firstlev2

# On StorageSystem2 :
ifgrp create multi Firstlev3 e5 e6
ifgrp create multi Firstlev4 e7 e8
ifgrp create single Secondlev2 Firstlev3 Firstlev4

# On StorageSystem1:
ifconfig Secondlev1 partner Secondlev2

# On StorageSystem2 :
ifconfig Secondlev2 partner Secondlev1

Favoured/non-favoured interface

# select a favoured interface
ifgrp favor e3

# select a non-favoured interface
ifgrp nofavor e3

Add

ifgrp add MultiTrunk1 e4

Delete

ifconfig MultiTrunk1 down
ifgrp delete MultiTrunk1 e4

Note: You must configure the interface group to the down state before you can delete a network interface
from the interface group

Destroy

ifconfig ifgrp_name down
ifgrp destroy ifgrp_name

Note: You must configure the interface group to the down state before you can destroy it

Enable/disable an interface group

ifconfig ifgrp_name up
ifconfig ifgrp_name down

Status

ifgrp status [ifgrp_name]

Stat

ifgrp stat [ifgrp_name] [interval]


Diagnostic Tools

Useful options

Ping throttling

# Throttle ping
options ip.ping_throttle.drop_level value

# Disable ping throttling
options ip.ping_throttle.drop_level 0

Forged ICMP attacks

options ip.icmp_ignore_redirect.enable on

Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks.

Useful Commands

netdiag - The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action.

ping - You can use the ping command to test whether your storage system can reach other hosts on your network.

pktt - You can use the pktt command to trace the packets sent and received on the storage system's network.


File Access using NFS

Export Options

actual=path        Specifies the actual file system path corresponding to the exported file system path.
anon=uid|name      Specifies the effective user ID (or name) of all anonymous or root NFS client users that access the file system path.
nosuid             Disables setuid and setgid executables and mknod commands on the file system path.
ro | ro=clientid   Specifies which NFS clients have read-only access to the file system path.
rw | rw=clientid   Specifies which NFS clients have read-write access to the file system path.
root=clientid      Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-).
sec=sectype        Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively).

The security types can be one of the following:

none   No security. Data ONTAP treats all of the NFS client's users as anonymous users.
sys    Standard UNIX (AUTH_SYS) authentication. Data ONTAP checks the NFS credentials of all of the NFS client's users, applying the file access permissions specified for those users in the NFS server's /etc/passwd file. This is the default security type.
krb5   Kerberos(tm) Version 5 authentication. Data ONTAP uses data encryption standard (DES) key encryption to authenticate the NFS client's users.
krb5i  Kerberos(tm) Version 5 integrity. In addition to authenticating the NFS client's users, Data ONTAP uses message authentication codes (MACs) to verify the integrity of the NFS client's remote procedure requests and responses, thus preventing "man-in-the-middle" tampering.
krb5p  Kerberos(tm) Version 5 privacy. In addition to authenticating the NFS client's users and verifying data integrity, Data ONTAP encrypts NFS arguments and results to provide privacy.
Examples

rw=10.45.67.0/24
ro,root=@trusted,rw=@friendly
rw,root=192.168.0.80,nosuid
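
Per the sec= description above, the option can also appear twice to grant different access to different security types; a sketch (semantics assumed from the description, not taken from the original examples) giving AUTH_SYS clients read-only and Kerberos v5 clients read-write access:

sec=sys,ro,sec=krb5,rw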

Export Commands

Displaying

exportfs
exportfs -q

Create

# create export in memory and write to /etc/exports (use default options)
exportfs -p /vol/nfs1

# create export in memory and write to /etc/exports (use specific options)
exportfs -p sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

# create export in memory only using specific options
exportfs -io sec=none,rw,root=192.168.0.80,nosuid /vol/nfs1

Remove

# Memory only
exportfs -u /vol/nfs1

# Memory and /etc/exports
exportfs -z /vol/nfs1

Export all

exportfs -a

Check access

exportfs -c 192.168.0.80 /vol/nfs1

Flush

exportfs -f
exportfs -f /vol/nfs1

Reload

exportfs -r

Storage path

exportfs -s /vol/nfs1

Write exports to a file

exportfs -w filename
fencing

# Suppose /vol/vol0 is exported with the following export options:
  
   -rw=pig:horse:cat:dog,ro=duck,anon=0

# The following command enables fencing of cat from /vol/vol0
exportfs -b enable save cat /vol/vol0

# cat moves to the front of the ro= list for /vol/vol0:

   -rw=pig:horse:dog,ro=cat:duck,anon=0

Stats

nfsstat


File Access using CIFS

Useful CIFS options

Change the security style

options wafl.default_security_style {ntfs | unix | mixed}

Timeout

options cifs.idle_timeout time

Performance

options cifs.oplocks.enable on

Note: Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost.
CIFS Commands

Useful files

/etc/cifsconfig_setup.cfg
/etc/usermap.cfg
/etc/passwd
/etc/cifsconfig_share.cfg

Note: use "rdfile" to read the files
CIFS setup

cifs setup

Note: you will be prompted to answer a number of questions based on your requirements.

Start

cifs restart

Stop

cifs terminate

# terminate a specific client
cifs terminate client_name | IP_address

Sessions

cifs sessions
cifs sessions user_name
cifs sessions IP_address

# Authentication
cifs sessions -t

# Changes
cifs sessions -c

# Security Info
cifs sessions -s

Broadcast message

cifs broadcast * "message"
cifs broadcast client_name "message"
Permissions

cifs access share user|group rights

# Examples
cifs access sysadmins -g wheel "Full Control"
cifs access -delete releases ENGINEERING\mary

Note: rights can be Unix-style combinations of r w x - or NT-style "No Access", "Read", "Change", and "Full Control"

Stats

cifs stat
cifs stat interval
Create a share

# create a volume in the normal way

# then using qtrees set the style of the volume {ntfs | unix | mixed}

# Now you can create your share
cifs shares -add TEST /vol/flexvol1/TEST -comment "Test Share" -forcegroup workgroup -maxusers 100

Change share characteristics

cifs shares -change sharename {-browse | -nobrowse} {-comment desc | -nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | -nosymlink_strict_security} {-vscan | -novscan} {-vscanread | -novscanread} {-umask mask | -noumask} {-no_caching | -manual_caching | -auto_document_caching | -auto_program_caching}

# example
cifs shares -change TEST -novscan
Home directories

# Display home directories
cifs homedir

# Add a home directory
wrfile -a /etc/cifs_homedir.cfg /vol/TEST

# check it
rdfile /etc/cifs_homedir.cfg

# Display for a Windows Server
net view \\192.168.0.75

# Connect
net use * \\192.168.0.75\TEST

Note: make sure the directory exists
domain controller

# add a domain controller
cifs prefdc add lab 10.10.10.10 10.10.10.11

# delete a domain controller
cifs prefdc delete lab

# List domain information 
cifs domaininfo

# List the preferred controllers
cifs prefdc print

# Re-establish the domain controller connection
cifs resetdc

Change the filer's domain password

cifs changefilerpwd
Tracing permission problems

sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a]

#Examples
sectrace add -ip 192.168.10.23
sectrace add -unixuser foo -path /vol/vol0/home4 -a

# To remove
sectrace delete all
sectrace delete index

# Display tracing
sectrace show

# Display error code status
sectrace print-status
sectrace print-status 1:51544850432:32:78 


FCP Service

Display

fcp show adapter -v

Daemon status

fcp status

Start

fcp start

Stop

fcp stop

Stats

fcp stats -i interval [-c count] [-a | adapter]

fcp stats -i 1

Target expansion adapters

fcp config adapter [down|up]

fcp config 4a down

Target adapter speed

fcp config adapter speed [auto|1|2|4|8]

fcp config 4a speed 8

set WWPN #

fcp portname set [-f] adapter wwpn

fcp portname set -f 1b 50:0a:09:85:87:09:68:ad

swap WWPN #

fcp portname swap [-f] adapter1 adapter2

fcp portname swap -f 1a 1b

change WWNN

# display nodename 
fcp nodename

fcp nodename [-f] nodename

fcp nodename 50:0a:09:80:82:02:8d:ff

Note: The WWNN of a storage system is generated from a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system.

WWPN Aliases – display

fcp wwpn-alias show
fcp wwpn-alias show -a my_alias_1
fcp wwpn-alias show -w 10:00:00:00:c9:30:80:2f

WWPN Aliases - create

fcp wwpn-alias set [-f] alias wwpn

fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f
WWPN Aliases – remove

fcp wwpn-alias remove [-a alias … | -w wwpn]

fcp wwpn-alias remove -a my_alias_1
fcp wwpn-alias remove -w 10:00:00:00:c9:30:80:2f


iSCSI Commands

Display

iscsi initiator show
iscsi session show [-t]
iscsi connection show -v
iscsi security show

Status

iscsi status

Start

iscsi start

Stop

iscsi stop

Stats

iscsi stats

Nodename

iscsi nodename

# to change the name
iscsi nodename new_nodename

Interfaces

iscsi interface show

iscsi interface enable e0b
iscsi interface disable e0b

Portals

iscsi portal show

Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system's target IP addresses are the addresses of the interfaces used for the iSCSI protocol.

Access lists

iscsi interface accesslist show

Note: you can add or remove interfaces from the list (see the sketch below)
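
A sketch of adding and removing an interface for an initiator; the initiator nodename is illustrative and the exact argument form should be confirmed against the iscsi man page:

iscsi interface accesslist add iqn.1991-05.com.microsoft:host1 e0a
iscsi interface accesslist remove iqn.1991-05.com.microsoft:host1 e0a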


LUN Configuration

Display

lun show
lun show -m
lun show -v

Initialize/configure LUNs, mapping

lun setup

Note: follow the prompts to create and configure LUNs

Create

lun create -s 100m -t windows /vol/tradvol1/lun1

Destroy

lun destroy [-f] /vol/tradvol1/lun1

Note: the "-f" will force the destroy

Resize

lun resize lun_path new_size

lun resize /vol/tradvol1/lun1 75m

Restart block protocol access

lun online /vol/tradvol1/lun1

Stop block protocol access

lun offline /vol/tradvol1/lun1

Map a LUN to an initiator group

lun map /vol/tradvol1/lun1 win_hosts_group1 0
lun map -f /vol/tradvol1/lun2 linux_host_group1 1

lun show -m

Note: use "-f" to force the mapping

Remove LUN mapping

lun show -m
lun offline /vol/tradvol1/lun1
lun unmap /vol/tradvol1/lun1 win_hosts_group1 0

Display or zero read/write statistics for a LUN

lun stats /vol/tradvol1/lun1

Comments

lun comment /vol/tradvol1/lun1 "10GB for payroll records"

Check all lun/igroup/fcp settings for correctness

lun config_check -v

Manage LUN cloning

# Create a Snapshot copy of the volume containing the LUN to be cloned
snap create tradvol1 tradvol1_snapshot_08122010

# Create the LUN clone
lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snapshot_08122010

Show the maximum possible size of a LUN on a given volume or qtree

lun maxsize /vol/tradvol1

Move (rename) a LUN

lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1

Display/change LUN serial number

lun serial -x /vol/tradvol1/lun1

Manage LUN properties

lun set reservation /vol/tradvol1/hpux/lun0

Configure NAS file-sharing properties

lun share lun_path { none | read | write | all }

Manage LUN and snapshot interactions

lun snap usage -s vol_name snap_name


Quota Commands

Quotas configuration file

/mroot/etc/quotas

Example quota file

##                                           hard limit | thres |soft limit
## Quota Target       type                    disk  files| hold  |disk  file
## -------------      -----                   ----  ----- ------ ----- -----
*                    tree@/vol/vol0           -     -      -     -     -     # monitor usage on all qtrees in vol0
/vol/vol2/qtree      tree                    1024K  75k    -     -     -     # enforce qtree quota using kb
tinh                 user@/vol/vol2/qtree1   100M   -      -     -     -     # enforce user quota in specified qtree
dba                  group@/vol/ora/qtree1   100M   -      -     -     -     # enforce group quota in specified qtree

# * = default user/group/qtree
# - = placeholder, no limit enforced, just enables stats collection

Note: there are many permutations, so check the documentation
Displaying

quota report [path]

Activating

quota on [-w] vol_name

Note:
-w = return only after the entire quotas file has been scanned

Deactivating

quota off [-w] vol_name

Reinitializing

quota off [-w] vol_name
quota on [-w] vol_name

Resizing

quota resize vol_name

Note: this command rereads the quota file
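
For instance (volume name illustrative), after editing /etc/quotas to change a limit:

filer1> quota resize vol2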
Deleting

# edit the quota file, then
quota resize vol_name

Log messaging

quota logmsg


QTree Commands

Display

qtree status [-i] [-v]

Note:
The -i option includes the qtree ID number in the display.
The -v option includes the owning vFiler unit, if the MultiStore license is enabled.

Adding (create)

## Syntax - by default the wafl.default_qtree_mode option is used
qtree create path [-m mode]

## create a qtree named news in the /vol/users volume using 770 as permissions
qtree create /vol/users/news -m 770

Remove

rm -Rf directory

Rename

mv old_name new_name

Convert a directory into a qtree directory

## Move the directory to a different directory
mv /n/joel/vol1/dir1 /n/joel/vol1/olddir

## Create the qtree
qtree create /n/joel/vol1/dir1

## Move the contents of the old directory back into the new qtree
mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1

## Remove the old directory name
rmdir /n/joel/vol1/olddir

Stats

qtree stats [-z] [vol_name]

Note:
-z = zero stats

Change the security style

## Syntax
qtree security path {unix | ntfs | mixed}

## Change the security style of /vol/users/docs to mixed
qtree security /vol/users/docs mixed


Deduplication Commands

Start/restart deduplication operation

sis start -s path

sis start -s /vol/flexvol1

## Use previous checkpoint
sis start -sp path

Stop deduplication operation

sis stop path

Schedule deduplication

sis config -s schedule path

sis config -s mon-fri@23 /vol/flexvol1

Note: schedule lists the days and hours of the day when deduplication runs. The schedule can take the following forms (see the sketch after this list):

  • day_list[@hour_list]
    If hour_list is not specified, deduplication runs at midnight on each scheduled day.
  • hour_list[@day_list]
    If day_list is not specified, deduplication runs every day at the specified hours.
  • -
    A hyphen (-) disables deduplication operations for the specified FlexVol volume.
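
A short sketch of each form, reusing /vol/flexvol1 from above:

## every day at midnight
sis config -s sun-sat@0 /vol/flexvol1

## 11pm, Monday to Friday
sis config -s 23@mon-fri /vol/flexvol1

## disable scheduled deduplication
sis config -s - /vol/flexvol1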
Enabling

sis on /vol/flexvol1

Disabling

sis off /vol/flexvol1

Status

sis status -l /vol/flexvol1

Display saved space

df -s /vol/flexvol1


FlexClone Commands

Display

vol status
vol status -v

df -Lh

Adding (create)

## Syntax
vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap]

## create a flexclone called flexclone1 from the parent flexvol1
vol clone create flexclone1 -b flexvol1

Removing (destroy)

vol offline clone_name
vol destroy clone_name

Splitting

## Determine the free space required to perform the split
vol clone split estimate clone_name

## Double check you have the space
df -Ah

## Perform the split
vol clone split start clone_name

## Check up on its status
vol clone split status clone_name

## Stop the split
vol clone split stop clone_name

Log file

/etc/log/clone

The clone log file records the following information:
• Cloning operation ID
• The name of the volume in which the cloning operation was performed
• Start time of the cloning operation
• End time of the cloning operation
• Parent file/LUN and clone file/LUN names
• Parent file/LUN ID
• Status of the clone operation: successful, unsuccessful, or stopped and some other details


FlexVol Volume Operations (only)

Adding (creating)

## Syntax
vol create vol_name [-l language_code] [-s {volume|file|none}] aggr_name size{k|m|g|t}

## Create a 200MB volume using the english character set
vol create newvol -l en aggr1 200M

## Create 50GB flexvol volume
vol create vol1 aggr0 50g

Additional disks

## add an additional disk to aggregate flexvol1, use "aggr status" to get the group name
aggr status flexvol1 -r
aggr add flexvol1 -g rg0 -d v5.25

Resizing

vol size vol_name [+|-] n{k|m|g|t}

## Increase flexvol1 volume by 100MB
vol size flexvol1 +100m

Automatically resizing

vol autosize vol_name [-m size {k|m|g|t}] [-i size {k|m|g|t}] on

## automatically grow by 10MB increments to a max of 500MB
vol autosize flexvol1 -m 500m -i 10m on

Determine free space and inodes

df -Ah
df -i

Determine size

vol size vol_name

Automatic free space preservation

vol options vol_name try_first [volume_grow|snap_delete]

Note:
If you specify volume_grow, Data ONTAP attempts to increase the volume's size before deleting any Snapshot copies. Data ONTAP increases the volume size based on specifications you provided using the vol autosize command.

If you specify snap_delete, Data ONTAP attempts to create more free space by deleting Snapshot copies, before increasing the size of the volume. Data ONTAP deletes Snapshot copies based on the specifications you provided using the snap autodelete command.
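
For example (a sketch using the flexvol1 volume from above):

vol options flexvol1 try_first volume_grow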

Display a FlexVol volume's containing aggregate

vol container vol_name
Cloning

vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap]

vol clone split start clone_vol
vol clone split stop clone_vol
vol clone split estimate clone_vol
vol clone split status clone_vol

Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a "backing" flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes.

Copying

vol copy start [-S | -s snapshot] source destination
vol copy status [operation_number]
vol copy abort operation_number
vol copy throttle [operation_number] value

## Example - Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on a remote
## filer named toaster1.
vol copy start -s nightly.1 vol0 toaster1:vol0

Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume.

The source and destination volumes must either both be traditional volumes or both be flexible volumes. The vol copy command will abort if an attempt is made to copy between different volume types.

The source and destination volumes can be on the same filer or on different filers. If the source or destination volume is on a filer other than the one on which the vol copy start command was entered, specify the volume name in the filer_name:volume_name format.
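
Per the note above, a sketch of copying a volume together with all of its snapshots by using the -S flag:

vol copy start -S vol0 toaster1:vol0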


Traditional Volume Operations (only)

Adding (creating)

vol|aggr create vol_name -v [-l language_code] [-f] [-m] [-n] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] disk-list

## create traditional volume using aggr command
aggr create tradvol1 -l en -t raid4 -d v5.26 v5.27

## create traditional volume using vol command
vol create tradvol1 -l en -t raid4 -d v5.26 v5.27

## Create traditional volume using 20 disks, each RAID group can have 10 disks
vol create vol1 -r 10 20

Additional disks

vol add volname [-f] [-n] [-g raidgroup] { ndisks[@size] | -d disk1 [disk2 ...] }

## add another disk to the already existing traditional volume
vol add tradvol1 -d v5.28

Splitting

aggr split aggr_name/plex_name new_aggr
Scrubbing (parity)

## The newer "aggr scrub" command is preferred

vol scrub status [volname|plexname|groupname][-v]

vol scrub start [volname|plexname|groupname][-v]
vol scrub stop [volname|plexname|groupname][-v]

vol scrub suspend [volname|plexname|groupname][-v]
vol scrub resume [volname|plexname|groupname][-v]

Note: Print the status of parity scrubbing on the named traditional volume, plex or RAID group. If no name is provided, the status is given on all RAID groups currently undergoing parity scrubbing. The status includes a percent-complete as well as the scrub’s suspended status (if any). 

Verify (mirroring)

## The newer "aggr verify" command is preferred

## verify status 
vol verify status 

## start a verify operation
vol verify start [ aggrname ]

## stop a verify operation
vol verify stop [ aggrname ]

## suspend a verify operation
vol verify suspend [ aggrname ]

## resume a verify operation
vol verify resume [ aggrname ]

Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.


General Volume Operations (Traditional and FlexVol)

Displaying

vol status
vol status -v (verbose)
vol status -l (display language)

Remove (destroying)

vol offline vol_name
vol destroy vol_name

Rename

vol rename old_name new_name

Online

vol online vol_name

Offline

vol offline vol_name

Restrict

vol restrict vol_name

Decompress

vol decompress status
vol decompress start vol_name
vol decompress stop vol_name
Mirroring

vol mirror volname [-n] [-v victim_volname] [-f] [-d disk1 [disk2 ...]]

Note:
Mirrors the currently-unmirrored traditional volume volname, either with the specified set of disks or with the contents of another unmirrored traditional volume victim_volname, which will be destroyed in the process.

The vol mirror command fails if either the chosen volname or victim_volname are flexible volumes. Flexible volumes require that any operations having directly to do with their containing aggregates be handled via the new aggr command suite.
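
A sketch (disk names illustrative) that mirrors the two-disk traditional volume created earlier with two spare disks:

vol mirror tradvol1 -d v5.28 v5.29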

Change language

vol lang vol_name

Change maximum number of files

## Display maximum number of files
maxfiles vol_name

## Change maximum number of files
maxfiles vol_name max_num_files

Change root volume

vol options vol_name root

Media Scrub

vol media_scrub status [volname|plexname|groupname -s disk-name] [-v]

Note: Prints the media scrubbing status of the named aggregate, volume, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.

Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on


Aggregate Commands

Displaying

aggr status
aggr status -r
aggr status [-v]

Check you have spare disks

aggr status -s

Adding (creating)

## Syntax - if no option is specified then the default is used
aggr create aggr_name [-f] [-m] [-n] [-t {raid0 | raid4 | raid_dp}] [-r raid_size] [-T disk_type] [-R rpm] [-L] [-B {32|64}] disk_list

## create aggregate called newaggr with RAID groups of up to 8 disks
aggr create newaggr -r 8 -d 8a.16 8a.17 8a.18 8a.19

## create aggregate called newfastaggr using 20 x 15000rpm disks
aggr create newfastaggr -R 15000 20

## create aggregate called newFCALaggr (note SAS and FC disks may be used)
aggr create newFCALaggr -T FCAL 15

Note:

-f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools
-m = specifies the optional creation of a SyncMirror
-n = displays the results of the command but does not execute it
-r = maximum size (number of disks) of the RAID groups for this aggregate
-T = disk type ATA, SATA, SAS, BSAS, FCAL or LUN
-R = rpm which include 5400, 7200, 10000 and 15000

Remove (destroying)

aggr offline aggr_name
aggr destroy aggr_name

Unremoving (undestroying)

aggr undestroy aggr_name

Rename

aggr rename old_name new_name

Increase size

## Syntax
aggr add aggr_name [-f] [-n] [-g {raid_group_name | new | all}] ndisks[@size] | -d disk1 [disk2 ...]

## add an additional disk to aggregate pfvAggr, use "aggr status" to get the group name
aggr status pfvAggr -r
aggr add pfvAggr -g rg0 -d v5.25

## Add 4 x 300GB disks to aggregate aggr1
aggr add aggr1 4@300

Offline

aggr offline aggr_name

Online

aggr online aggr_name

Restricted state

aggr restrict aggr_name

Change aggregate options

## to display the aggregate's options
aggr options aggr_name

## change an aggregate's raid type
aggr options aggr_name raidtype raid_dp

## change an aggregate's raid size
aggr options aggr_name raidsize 4

Show space usage

aggr show_space aggr_name

Mirror

aggr mirror aggr_name

Split mirror

aggr split aggr_name/plex_name new_aggr

Copy from one aggregate to another

## Obtain the status
aggr copy status

## Start a copy
aggr copy start source_aggr dest_aggr

## Abort a copy - obtain the operation number by using "aggr copy status"
aggr copy abort operation_number

## Throttle the copy, 10=full speed, 1=one-tenth full speed
aggr copy throttle operation_number value
Scrubbing (parity)

## Media scrub status 
aggr media_scrub status
aggr scrub status 

## start a scrub operation
aggr scrub start [ aggrname | plexname | groupname ]

## stop a scrub operation
aggr scrub stop [ aggrname | plexname | groupname ]

## suspend a scrub operation
aggr scrub suspend [ aggrname | plexname | groupname ]

## resume a scrub operation
aggr scrub resume [ aggrname | plexname | groupname ]

Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the
parity disk(s) in their RAID group, correcting the parity disk’s contents as necessary. If no name is
given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is
started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on
all RAID groups contained in the plex.

Look at the following system options:

raid.scrub.duration 360
raid.scrub.enable on
raid.scrub.perf_impact low
raid.scrub.schedule

Verify (mirroring)

## verify status 
aggr verify status 

## start a verify operation
aggr verify start [ aggrname ]

## stop a verify operation
aggr verify stop [ aggrname ]

## suspend a verify operation
aggr verify suspend [ aggrname ]

## resume a verify operation
aggr verify resume [ aggrname ]

Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then
RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in
both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes
are made.

Media Scrub

aggr media_scrub status 

Note: Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then
status is printed for all RAID groups currently running a media scrub. The status includes a
percent-complete and whether it is suspended.

Look at the following system options:

raid.media_scrub.enable on
raid.media_scrub.rate 600
raid.media_scrub.spares.enable on


Disk Commands

Display

disk show
disk_list
sysconfig -r
sysconfig -d

## list all unassigned/assigned disks
disk show -n
disk show -a

Adding (assigning)

## Add a specific disk to pool1, the mirror pool
disk assign disk_name -p 1

## Assign all disks to pool 0; by default they are assigned to pool 0 if the "-p"
## option is not specified
disk assign all -p 0
Remove (spin down disk)

disk remove disk_name

Reassign

disk reassign -d new_sysid

Replace

disk replace start disk_name spare_disk_name
disk replace stop disk_name

Note: uses Rapid RAID Recovery to copy data from the specified file system disk to the specified spare disk; you can stop this process using the stop command
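
For instance (disk names illustrative), to copy data from a file system disk onto a matching spare:

disk replace start 0b.16 0b.24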
Zero spare disks

disk zero spares

Fail a disk

disk fail disk_name

Scrub a disk

disk scrub start
disk scrub stop

Sanitize

disk sanitize start disk_list
disk sanitize abort disk_list
disk sanitize status
disk sanitize release disk_list

Note: release modifies the state of the disk from sanitize to spare. Sanitize requires a license.

Maintenance

disk maint start -d disk_list
disk maint abort disk_list
disk maint list
disk maint status

Note: you can test disks using maintenance mode

Swap a disk

disk swap
disk unswap

Note: swap stalls all SCSI I/O until you physically replace or add a disk; it can be used on SCSI disks only.

Statistics

disk_stat disk_name

Simulate a pulled disk

disk simpull disk_name

Simulate a pushed disk

disk simpush -l
disk simpush disk_name

## Example
ontap1> disk simpush -l
The following pulled disks are available for pushing:
v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448

ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448


Storage Commands

Display

storage show adapter
storage show disk [-a|-x|-p|-T]
storage show expander
storage show fabric
storage show fault
storage show hub
storage show initiators
storage show mc
storage show port
storage show shelf
storage show switch
storage show tape [supported]
storage show acp

storage array show
storage array show-ports
storage array show-luns
storage array show-config

Enable

storage enable adapter

Disable

storage disable adapter

Rename switch

storage rename switch_name new_name

Remove port

storage array remove-port array_name -p WWPN

Load balance

storage load balance

Power cycle

storage power_cycle shelf -h
storage power_cycle shelf start -c channel_name
storage power_cycle shelf completed


Statistical Information

System       stats show system
Processor    stats show processor
Disk         stats show disk
Volume       stats show volume
LUN          stats show lun
Aggregate    stats show aggregate
FC           stats show fcp
iSCSI        stats show iscsi
CIFS         stats show cifs
Network      stats show ifnet


Environment Information

General information

environment status

Disk enclosures (shelves)

environment shelf [adapter]
environment shelf_power_status

Chassis

environment chassis all
environment chassis list-sensors
environment chassis Fans
environment chassis CPU_Fans
environment chassis Power
environment chassis Temperature
environment chassis [PS1|PS2]


System Configuration

General information

sysconfig
sysconfig -v
sysconfig -a (detailed)

Configuration errors

sysconfig -c

Display disk devices

sysconfig -d
sysconfig -A

Display RAID group information

sysconfig -V

Display aggregates and plexes

sysconfig -r

Display tape devices

sysconfig -t

Display tape libraries

sysconfig -m


Startup and Shutdown
Boot Menu

1) Normal Boot.
2) Boot without /etc/rc.
3) Change password.
4) Clean configuration and initialize all disks.
5) Maintenance mode boot.
6) Update flash from backup config.
7) Install new software first.
8) Reboot node.
Selection (1-8)?

  • Normal Boot – continue with the normal boot operation
  • Boot without /etc/rc – boot with only default options and disable some services
  • Change Password – change the storage system's password
  • Clean configuration and initialize all disks – cleans all disks and resets the filer to factory default settings
  • Maintenance mode boot – file system operations are disabled; limited set of commands
  • Update flash from backup config – restore the configuration information if corrupted on the boot device
  • Install new software first – use this if the filer does not include support for the storage array
  • Reboot node – restart the filer
Startup modes

  • boot_ontap – boots the current Data ONTAP software release stored on the boot device
  • boot_primary – boots the Data ONTAP release stored on the boot device as the primary kernel
  • boot_backup – boots the backup Data ONTAP release from the boot device
  • boot_diags – boots a Data ONTAP diagnostic kernel

Note: there are other options, but NetApp will provide these as and when necessary

Shutdown

halt [-t minutes] [-f]

-t = shut down after the specified number of minutes
-f = used with HA clustering; means that the partner filer does not take over

Restart

reboot [-t minutes] [-s] [-r] [-f]

-t = reboot in the specified number of minutes
-s = clean reboot but also power cycle the filer (like pushing the off button)
-r = bypasses the shutdown (not clean) and power cycles the filer
-f = used with HA clustering; means that the partner filer does not take over


Network Interfaces

Display

ifconfig -a
ifconfig interface_name

IP address

ifconfig e0 192.168.0.10
ifconfig e0a 192.168.0.10

# Remove an IP Address
ifconfig e3 0

Subnet mask

ifconfig e0a netmask 255.255.255.0

Broadcast

ifconfig e0a broadcast 192.168.0.255

Media type

ifconfig e0a mediatype 100tx-fd

Maximum transmission unit (MTU)

ifconfig e8 mtusize 9000

Flow control

ifconfig interface_name flowcontrol value

# example
ifconfig e8 flowcontrol none

Note: value is the flow control type. You can specify the following values for the flowcontrol option:

none    – No flow control
receive – Able to receive flow control frames
send    – Able to send flow control frames
full    – Able to send and receive flow control frames

The default flowcontrol type is full.

Trusted

ifconfig e8 untrusted

Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped.
HA Pair

ifconfig e8 partner partner_ip

## You must enable takeover on interface failures by entering the following commands:
options cf.takeover.on_network_interface_failure enable
ifconfig interface_name {nfo|-nfo}

nfo  – Enables negotiated failover
-nfo – Disables negotiated failover

Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs

Alias

# Create alias
ifconfig e0 alias 192.0.2.30

# Remove alias
ifconfig e0 -alias 192.0.2.30

Block/unblock protocols

# Block
options interface.blocked.cifs e9
options interface.blocked.cifs e0a,e0b

# Unblock
options interface.blocked.cifs ""
Stats

ifstat interface_name
netstat

Note: there are many options to both these commands, so I will leave those to the man pages

Bring up/down an interface

ifconfig interface_name up
ifconfig interface_name down