Storage Limits and System Configuration
Storage limits
Limit | Value |
---|---|
snapshot max | 255 per volume |
qtree max | 4995 per volume |
vol max (64-bit) | 16TB per volume |
flexvol max | FAS2040: 200; all other models: 500 per filer |
flexvol min | 20MB |
flexvol max (32-bit) | 16TB |
flexvol max (64-bit) | 100TB (model-dependent) |
trad vol max | 16TB per volume |
aggr max (32-bit) | 16TB |
aggr max (64-bit) | model-dependent |
aggr min size | RAID-DP: 3 disks / RAID4: 2 disks |
raid group max | 150 per aggr |
raid group max | 400 per filer |
max lun size | 16TB per lun |
SnapMirror configuration steps:
1) enable SnapMirror on the source and destination filers
source-filer> options snapmirror.enable
snapmirror.enable on
source-filer> options snapmirror.access
snapmirror.access legacy
2) snapmirror access
Make sure the destination filer has SnapMirror access to the source filer.
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
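If a destination filer is missing from this file, it can be appended in place (a sketch; wrfile -a appends a single line to a file on the filer):
source-filer> wrfile -a /etc/snapmirror.allow destination-filer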
3) initializing a snapmirror relationship
destination-filer> vol create demo_destination aggr01 100g
destination-filer> vol restrict demo_destination
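With the destination volume created and restricted, the baseline transfer itself is started from the destination filer (a sketch using the volume and qtree names above):
destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree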
4) monitoring the status
destination-filer> snapmirror status
snapmirror is on.
source destination state lag status
source-filer:demo_source destination-filer:demo_destination uninitialized – transferring (1690 mb done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree uninitialized – transferring (32 mb done)
5) snapmirror schedule
You can set a synchronous SnapMirror schedule in /etc/snapmirror.conf by using “sync” instead of the cron-style frequency.
destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination – 0 * * * # this updates every hour, on the hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree – 0 21 * * # this updates daily at 9:00 pm
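For a synchronous mirror, the cron-style fields are replaced by the sync keyword (a sketch using the same volumes as above):
source-filer:demo_source destination-filer:demo_destination - sync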
User-specified Snapshot copy schedules
You can configure weekly, nightly, or hourly Snapshot copy schedules using the snap sched command.
Type | Description |
---|---|
Weekly | Snapshot copies every Sunday at midnight. |
Nightly | Snapshot copies every night at midnight, except when a weekly Snapshot copy is scheduled to occur at the same time. |
Hourly | Snapshot copies on the hour or at specified hours, except if a weekly or nightly Snapshot copy is scheduled to occur at the same time. |
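These schedules are set per volume with snap sched. A sketch, assuming a volume named vol0: keep 2 weekly, 6 nightly, and 8 hourly copies, with the hourly copies taken at 8:00, 12:00, 16:00, and 20:00:
filer1> snap sched vol0 2 6 8@8,12,16,20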
Snapvault commands
Add license
Source filer – filer1
filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=svfiler
Destination filer – svfiler
svfiler> license add XXXXX
svfiler> options snapvault.enable on
svfiler> options snapvault.access host=filer1
Disabling the snapshot schedule on the destination volume
svfiler> snap sched demo_vault 0 0 0
Creating Initial backup:
On the destination filer, run the command below to initiate the baseline transfer. The time taken to complete depends on the size of the data in the source qtree and the network bandwidth. Run “snapvault status” on the source/destination filers to monitor the baseline transfer’s progress.
svfiler> snapvault start -S filer1:/vol/datasource/qtree1 svfiler:/vol/demo_vault/qtree1
Creating backup schedules:
On source filer:
filer1> snapvault snap sched datasource sv_hourly 2@0-22
filer1> snapvault snap sched datasource sv_daily 2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun
On snapvault filer:
If you don’t use the -x option, the secondary does not contact the primary and transfer the Snapshot copy. It just creates a snapshot copy of the destination volume.
svfiler> snapvault snap sched -x demo_vault sv_hourly 6@0-22
svfiler> snapvault snap sched -x demo_vault sv_daily 14@23@sun-fri
svfiler> snapvault snap sched -x demo_vault sv_weekly 6@23@sun
To check the SnapVault status, use the command “snapvault status” on either the source or destination filer. To see the backups, run a “snap list” on the destination volume – that lists all the backup copies, their creation times, and so on.
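For example (a sketch using the names above):
svfiler> snapvault status
svfiler> snap list demo_vault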
Interface Groups
Create (single-mode) | # To create a single-mode interface group: ifgrp create single SingleTrunk1 e0 e1 e2 e3 # To configure an IP address of 192.168.0.10 and a netmask of 255.255.255.0 on the single-mode interface group SingleTrunk1: ifconfig SingleTrunk1 192.168.0.10 netmask 255.255.255.0 # To specify the interface e1 as preferred: ifgrp favor e1 |
Create (multi-mode) | # To create a static multimode interface group comprising interfaces e0, e1, e2, and e3 and using MAC address load balancing: ifgrp create multi MultiTrunk1 -b mac e0 e1 e2 e3 # To create a dynamic multimode (LACP) interface group comprising interfaces e0, e1, e2, and e3 and using IP address load balancing: ifgrp create lacp MultiTrunk1 -b ip e0 e1 e2 e3 |
Create second-level interface group | # To create two multimode interface groups and a second-level interface group on top of them, with IP address load balancing: ifgrp create multi Firstlev1 e0 e1 ifgrp create multi Firstlev2 e2 e3 ifgrp create single Secondlev Firstlev1 Firstlev2 # To enable failover to the multimode interface group with the higher aggregate bandwidth when one or more links in the active group fail: options ifgrp.failover.link_degraded on Note: You can create a second-level interface group by using two multimode interface groups. Second-level interface groups enable you to provide a standby multimode interface group in case the primary multimode interface group fails. |
Create second-level interface group in an HA pair | # Use the following commands to create a second-level interface group in an HA pair. In this example, # IP-based load balancing is used for the multimode interface groups. # On StorageSystem1: ifgrp create multi Firstlev1 e1 e2 ifgrp create multi Firstlev2 e3 e4 ifgrp create single Secondlev1 Firstlev1 Firstlev2 # On StorageSystem2: ifgrp create multi Firstlev3 e5 e6 ifgrp create multi Firstlev4 e7 e8 ifgrp create single Secondlev2 Firstlev3 Firstlev4 # On StorageSystem1: ifconfig Secondlev1 partner Secondlev2 # On StorageSystem2: ifconfig Secondlev2 partner Secondlev1 |
Favoured/non-favoured interface | # Select a favoured interface: ifgrp favor e1 # Select a non-favoured interface: ifgrp nofavor e2 |
Add | ifgrp add MultiTrunk1 e4 |
Delete | ifconfig MultiTrunk1 down ifgrp delete MultiTrunk1 e4 Note: You must configure the interface group to the down state before you can delete a network interface from the interface group |
Destroy | ifconfig ifgrp_name down Note: You must configure the interface group to the down state before you can delete a network interface |
Enable/disable a interface group | ifconfig ifgrp_name up ifconfig ifgrp_name down |
Status | ifgrp status [ifgrp_name] |
Stat | ifgrp stat [ifgrp_name] [interval] |
Diagnostic Tools
Useful options | |
Ping throttling | # Throttle ping to a maximum number of packets per second options ip.ping_throttle.drop_level <max_packets_per_sec> # Disable ping throttling options ip.ping_throttle.drop_level 0 |
Forged ICMP attacks | options ip.icmp_ignore_redirect.enable on Note: You can disable ICMP redirect messages to protect your storage system against forged ICMP redirect attacks. |
Useful Commands | |
netdiag | The netdiag command continuously gathers and analyzes statistics, and performs diagnostic tests. These diagnostic tests identify and report problems with your physical network or transport layers and suggest remedial action. |
ping | You can use the ping command to test whether your storage system can reach other hosts on your network. |
pktt | You can use the pktt command to trace the packets sent and received in the storage system’s network. |
File Access using NFS
Export Options | |||||||||||
actual= | Specifies the actual file system path corresponding to the exported file system path. | ||||||||||
anon=n | Specifies the effective user ID (or name), n, of all anonymous or root NFS client users that access the file system path. |
nosuid | Disables setuid and setgid executables and mknod commands on the file system path. | ||||||||||
ro | ro=clientid | Specifies which NFS clients have read-only access to the file system path. | ||||||||||
rw | rw=clientid | Specifies which NFS clients have read-write access to the file system path. | ||||||||||
root=clientid | Specifies which NFS clients have root access to the file system path. If you specify the root= option, you must specify at least one NFS client identifier. To exclude NFS clients from the list, prepend the NFS client identifiers with a minus sign (-). | ||||||||||
sec=sectype | Specifies the security types that an NFS client must support to access the file system path. To apply the security types to all types of access, specify the sec= option once. To apply the security types to specific types of access (anonymous, non-super user, read-only, read-write, or root), specify the sec= option at least twice, once before each access type to which it applies (anon, nosuid, ro, rw, or root, respectively). The security type can be one of the following: none (no security; clients are mapped to the anonymous user), sys (Unix-style AUTH_SYS security), krb5 (Kerberos 5 authentication), krb5i (Kerberos 5 with integrity checking), krb5p (Kerberos 5 with privacy). |
Examples | rw=10.45.67.0/24 | ||||||||||
Export Commands | |||||||||||
Displaying | exportfs | ||||||||||
create | # Create an export in memory and write it to /etc/exports (default options) exportfs -p /vol/nfs1 # Create an export in memory and write it to /etc/exports (specific options) exportfs -p rw=192.168.0.80 /vol/nfs1 |
remove | # Memory only exportfs -u /vol/nfs1 # Memory and /etc/exports exportfs -z /vol/nfs1 |
export all | exportfs -a | ||||||||||
check access | exportfs -c 192.168.0.80 /vol/nfs1 | ||||||||||
flush | # Flush the entire export access cache exportfs -f # Flush a single path exportfs -f /vol/nfs1 |
reload | exportfs -r | ||||||||||
storage path | exportfs -s /vol/nfs1 |
Write export to a file | exportfs -w <filename> |
fencing | # Suppose /vol/vol0 is exported; to fence off (block) a client and save the change: exportfs -b enable save <clientid> /vol/vol0 # To unfence the client: exportfs -b disable save <clientid> /vol/vol0 |
stats | nfsstat |
File Access using CIFS
Useful CIFS options | |
change the security style | options wafl.default_security_style {ntfs | unix | mixed} |
timeout | options cifs.idle_timeout time |
Performance | options cifs.oplocks.enable on Note: Under some circumstances, if a process has an exclusive oplock on a file and a second process attempts to open the file, the first process must invalidate cached data and flush writes and locks. The client must then relinquish the oplock and access to the file. If there is a network failure during this flush, cached write data might be lost. |
CIFS Commands | |
useful files | /etc/cifsconfig_setup.cfg /etc/usermap.cfg /etc/passwd /etc/cifsconfig_share.cfg Note: use “rdfile” to read the file |
CIFS setup | cifs setup Note: you will be prompted to answer a number of questions based on what requirements you need. |
start | cifs restart |
stop | cifs terminate # Terminate a specific client cifs terminate <client_name> |
sessions | cifs sessions # Authentication cifs sessions -t # Changes cifs sessions -c # Security info cifs sessions -s |
Broadcast message | cifs broadcast * “message” cifs broadcast <client_name> “message” |
permissions | cifs access <share> <user-or-group> <rights> Note: rights can be Unix-style combinations of r w x – or NT-style “No Access”, “Read”, “Change”, and “Full Control” |
stats | cifs stat # Sample at a given interval (seconds) cifs stat <interval> |
create a share | # Create a volume in the normal way vol create TEST aggr0 10g # Then using qtrees set the security style of the volume {ntfs | unix | mixed} qtree security /vol/TEST unix # Now you can create your share cifs shares -add TEST /vol/TEST -comment "Test share" |
change share characteristics | cifs shares -change sharename {-browse | -nobrowse} {-comment desc | – nocomment} {-maxusers userlimit | -nomaxusers} {-forcegroup groupname | -noforcegroup} {-widelink | -nowidelink} {-symlink_strict_security | – nosymlink_strict_security} {-vscan | -novscan} {-vscanread | – novscanread} {-umask mask | -noumask {-no_caching | -manual_caching | – auto_document_caching | -auto_program_caching} # example cifs shares -change -novscan |
home directories | # Display home directories cifs homedir # Add a home directory wrfile -a /etc/cifs_homedir.cfg /vol/TEST # check it rdfile /etc/cifs_homedir.cfg # Display for a Windows Server net view \\ # Connect net use * \\192.168.0.75\TEST Note: make sure the directory exists |
domain controller | # Add a preferred domain controller cifs prefdc add <domain> <address> # Delete a domain controller cifs prefdc delete <domain> # List the preferred controllers cifs prefdc print |
change filers domain password | cifs changefilerpwd |
Tracing permission problems | sectrace add [-ip ip_address] [-ntuser nt_username] [-unixuser unix_username] [-path path_prefix] [-a] # Example sectrace add -ip 192.168.0.80 # To remove all filters sectrace delete all |
FCP service | |
display | fcp show adapter -v |
daemon status | fcp status |
start | fcp start |
stop | fcp stop |
stats | fcp stats -i interval [-c count] [-a | adapter] fcp stats -i 1 |
target expansion adapters | fcp config <adapter> [down|up] fcp config 4a down |
target adapter speed | fcp config <adapter> speed [auto|1|2|4|8] fcp config 4a speed 8 |
set WWPN # | fcp portname set [-f] adapter wwpn fcp portname set -f 1b 50:0a:09:85:87:09:68:ad |
swap WWPN # | fcp portname swap [-f] adapter1 adapter2 fcp portname swap -f 1a 1b |
change WWNN | # Display the current nodename fcp nodename # Change the nodename fcp nodename [-f] 50:0a:09:80:82:02:8d:ff Note: The WWNN of a storage system is generated by a serial number in its NVRAM, but it is stored on disk. If you ever replace a storage system chassis and reuse it in the same Fibre Channel SAN, it is possible, although extremely rare, that the WWNN of the replaced storage system is duplicated. In this unlikely event, you can change the WWNN of the storage system. |
WWPN Aliases – display | fcp wwpn-alias show |
WWPN Aliases – create | fcp wwpn-alias set [-f] alias wwpn fcp wwpn-alias set my_alias_1 10:00:00:00:c9:30:80:2f |
WWPN Aliases – remove | fcp wwpn-alias remove [-a alias … | -w wwpn] fcp wwpn-alias remove -a my_alias_1 |
iSCSI commands | |
display | iscsi initiator show iscsi session show [-t] iscsi connection show -v iscsi security show |
status | iscsi status |
start | iscsi start |
stop | iscsi stop |
stats | iscsi stats |
nodename | iscsi nodename # To change the name iscsi nodename <new_nodename> |
interfaces | iscsi interface show iscsi interface enable e0b iscsi interface disable e0b |
portals | iscsi portal show Note: Use the iscsi portal show command to display the target IP addresses of the storage system. The storage system’s target IP addresses are the addresses of the interfaces used for the iSCSI protocol |
accesslists | iscsi interface accesslist show Note: you can add or remove interfaces from the list |
LUN configuration | |
Display | lun show lun show -m lun show -v |
Initialize/Configure LUNs, mapping | lun setup Note: follow the prompts to create and configure LUN’s |
Create | lun create -s 100m -t windows /vol/tradvol1/lun1 |
Destroy | lun destroy [-f] /vol/tradvol1/lun1 Note: the “-f” will force the destroy |
Resize | lun resize [-f] <lun_path> <new_size> lun resize /vol/tradvol1/lun1 75m |
Restart block protocol access | lun online /vol/tradvol1/lun1 |
Stop block protocol access | lun offline /vol/tradvol1/lun1 |
Map a LUN to an initiator group | lun map /vol/tradvol1/lun1 win_hosts_group1 0 Note: use “-f” to force the mapping |
Remove LUN mapping | lun show -m lun offline /vol/tradvol1 lun unmap /vol/tradvol1/lun1 win_hosts_group1 0 |
Displays or zeros read/write statistics for LUN | lun stats /vol/tradvol1/lun1 |
Comments | lun comment /vol/tradvol1/lun1 “10GB for payroll records” |
Check all lun/igroup/fcp settings for correctness | lun config_check -v |
Manage LUN cloning | # Create a Snapshot copy of the volume containing the LUN to be cloned snap create tradvol1 tradvol1_snap # Create the LUN clone, backed by the LUN in the Snapshot copy lun clone create /vol/tradvol1/clone_lun1 -b /vol/tradvol1/lun1 tradvol1_snap |
Show the maximum possible size of a LUN on a given volume or qtree | lun maxsize /vol/tradvol1 |
Move (rename) LUN | lun move /vol/tradvol1/lun1 /vol/tradvol1/windows_lun1 |
Display/change LUN serial number | lun serial -x /vol/tradvol1/lun1 |
Manage LUN properties | lun set reservation /vol/tradvol1/hpux/lun0 |
Configure NAS file-sharing properties | lun share <lun_path> { none | read | write | all } |
Manage LUN and snapshot interactions | lun snap usage -s <volume> <snapshot> |
Quota Commands | |
Quotas configuration file | /etc/quotas |
Example quota file | ##Quota Target  type  disk  files  thres  sdisk  sfile ## (hard limits: disk, files | threshold: thres | soft limits: sdisk, sfile) |
Displaying | quota report [<quota_target>] |
Activating | quota on [-w] <vol_name> |
Deactivating | quota off [-w] <vol_name> |
Reinitializing | quota off [-w] <vol_name> quota on [-w] <vol_name> |
Resizing | quota resize Note: this command rereads the quota file |
Deleting | edit the quota file quota resize |
log messaging | quota logmsg |
QTree Commands | |
Display | qtree status [-i] [-v] Note: The -i option includes the qtree ID number in the display. The -v option includes the owning vFiler unit, if the MultiStore license is enabled. |
adding (create) | ## Syntax – by default wafl.default_qtree_mode option is used qtree create path [-m mode] ## create a news qtree in the /vol/users volume using 770 as permissions qtree create /vol/users/news -m 770 |
Remove | rm -Rf <qtree_path> |
Rename | mv <old_qtree_path> <new_qtree_path> |
convert a directory into a qtree directory | ## Move the directory to a different directory mv /n/joel/vol1/dir1 /n/joel/vol1/olddir ## Create the qtree qtree create /n/joel/vol1/dir1 ## Move the contents of the old directory back into the new QTree mv /n/joel/vol1/olddir/* /n/joel/vol1/dir1 ## Remove the old directory name rmdir /n/joel/vol1/olddir |
stats | qtree stats [-z] [vol_name] Note: -z = zero stats |
Change the security style | ## Syntax qtree security <path> {unix | ntfs | mixed} ## Change the security style of /vol/users/docs to mixed qtree security /vol/users/docs mixed |
Deduplication Commands | |
start/restart deduplication operation | sis start -s <path> sis start -s /vol/flexvol1 ## Use previous checkpoint sis start -sp /vol/flexvol1 |
stop deduplication operation | sis stop <path> |
schedule deduplication | sis config -s <schedule> <path> ## e.g. run every night at 11pm sis config -s mon-sun@23 /vol/flexvol1 |
enabling | sis on <path> |
disabling | sis off <path> |
status | sis status -l |
Display saved space | df -s |
FlexClone Commands | |
Display | vol status vol status -v df -Lh |
adding (create) | ## Syntax vol clone create clone_name [-s {volume|file|none}] -b parent_name [parent_snap] ## create a flexclone called flexclone1 from the parent flexvol1 vol clone create flexclone1 -b flexvol1 |
Removing (destroy) | vol offline <clone_name> vol destroy <clone_name> |
splitting | ## Determine the free space required to perform the split vol clone split estimate <clone_name> ## Double check you have the space df -Ah ## Perform the split vol clone split start <clone_name> ## Check up on its status vol clone split status <clone_name> ## Stop the split vol clone split stop <clone_name> |
log file | /etc/log/clone The clone log file records the following information: • Cloning operation ID • The name of the volume in which the cloning operation was performed • Start time of the cloning operation • End time of the cloning operation • Parent file/LUN and clone file/LUN names • Parent file/LUN ID • Status of the clone operation: successful, unsuccessful, or stopped and some other details |
FlexVol Volume Operations (only) | |
Adding (creating) | ## Syntax vol create <vol_name> [-l language_code] [-s {volume|file|none}] <aggr_name> <size>[k|m|g|t] ## Create a 200MB volume using the english character set vol create newvol -l en aggr1 200m |
additional disks | ## add an additional disk to aggregate flexvol1, use “aggr status” to get group name aggr status flexvol1 -r aggr add flexvol1 -g rg0 -d v5.25 |
Resizing | vol size [+|-] n{k|m|g|t} ## Increase flexvol1 volume by 100MB vol size flexvol1 + 100m |
Automatically resizing | vol autosize vol_name [-m size {k|m|g|t}] [-I size {k|m|g|t}] on ## automatically grow by 10MB increments to a max of 500MB vol autosize flexvol1 -m 500m -I 10m on |
Determine free space and Inodes | df -Ah df -I |
Determine size | vol size |
automatic free space preservation | vol options <vol_name> try_first [volume_grow|snap_delete] Note: determines which method Data ONTAP tries first when the volume runs low on space – growing the volume (volume_grow) or deleting Snapshot copies (snap_delete) |
display a FlexVol volume’s containing aggregate | vol container |
Cloning | vol clone create clone_vol [-s none|file|volume] -b parent_vol [parent_snap] Note: The vol clone create command creates a flexible volume named clone_vol on the local filer that is a clone of a “backing” flexible volume named parent_vol. A clone is a volume that is a writable snapshot of another volume. Initially, the clone and its parent share the same storage; more storage space is consumed only as one volume or the other changes. |
Copying | vol copy start [-S|-s snapshot] <source_vol> <dest_vol> ## Example – Copies the nightly snapshot named nightly.1 on volume vol0 on the local filer to the volume vol0 on the remote ## filer named toaster1 vol copy start -s nightly.1 vol0 toaster1:vol0 Note: Copies all data, including snapshots, from one volume to another. If the -S flag is used, the command copies all snapshots in the source volume to the destination volume. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If neither the -S nor -s flag is used in the command, the filer automatically creates a distinctively-named snapshot at the time the vol copy start command is executed and copies only that snapshot to the destination volume. |
Traditional Volume Operations (only) | |
adding (creating) | vol|aggr create vol_name [-l language_code] [-f] [-m] [-n] [-v] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] disk-list ## create traditional volume using aggr command aggr create tradvol1 -l en -t raid4 -d v5.26 v5.27 ## create traditional volume using vol command vol create tradvol1 -l en -t raid4 -d v5.26 v5.27 ## Create traditional volume using 20 disks, each RAID group can have 10 disks vol create vol1 -r 10 20 |
additional disks | vol add volname[-f][-n][-g ]{ ndisks[@size]|-d } ## add another disk to the already existing traditional volume vol add tradvol1 -d v5.28 |
splitting | aggr split |
Scrubbing (parity) | ## The newer “aggr scrub” command is preferred |
Verify (mirroring) | ## The newer “aggr verify” command is preferred Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, verification starts on all online mirrored aggregates. |
General Volume Operations (Traditional and FlexVol) | |
Displaying | vol status vol status -l (display language) |
Remove (destroying) | vol offline vol destroy |
Rename | vol rename |
online | vol online |
offline | vol offline |
restrict | vol restrict |
decompress | vol decompress status vol decompress start vol decompress stop |
Mirroring | vol mirror volname [-n][-v victim_volname][-f][-d <disk_list>] Note: mirrors the existing unmirrored traditional volume (requires a SyncMirror license) |
Change language | vol lang |
Change maximum number of files | ## Display maximum number of files maxfiles ## Change maximum number of files maxfiles |
Change root volume | vol options root |
Media Scrub | vol media_scrub status [volname|plexname|groupname -s disk-name][-v] Look at the following system options: |
Aggregate Commands | |
Displaying | aggr status aggr status -r aggr status [-v] |
Check you have spare disks | aggr status -s |
Adding (creating) | ## Syntax – if no option is specified then the default is used aggr create <aggr_name> [-f] [-n] [-t {raid4|raid_dp}] [-r raidsize] [-T disk-type] [-R rpm] [-L] <disk-list> ## create aggregate called newfastaggr using 20 x 15000rpm disks aggr create newfastaggr -R 15000 20 Note: -f = overrides the default behavior that does not permit disks in a plex to belong to different disk pools |
Remove(destroying) | aggr offline aggr destroy |
Unremoving(undestroying) | aggr undestroy |
Rename | aggr rename |
Increase size | ## Syntax aggr add <aggr_name> [-f] [-n] [-g {raid_group_name | new |all}] <disk_list> ## add an additional disk to aggregate pfvAggr, use “aggr status” to get group name aggr status pfvAggr -r aggr add pfvAggr -g rg0 -d v5.25 ## Add 4 300GB disks to aggregate aggr1 aggr add aggr1 4@300 |
offline | aggr offline |
online | aggr online |
restricted state | aggr restrict |
Change an aggregate options | ## to display the aggregates options |
show space usage | aggr show_space |
Mirror | aggr mirror |
Split mirror | aggr split |
Copy from one aggregate to another | ## Obtain the status aggr copy status ## Start a copy aggr copy start <source_aggr> <dest_aggr> ## Abort a copy – obtain the operation number by using “aggr copy status” aggr copy abort <operation_number> ## Throttle the copy 10=full speed, 1=one-tenth full speed aggr copy throttle <operation_number> <value> |
Scrubbing (parity) | ## Scrub status aggr scrub status Note: Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the parity disk(s) in their RAID group, correcting the parity disk’s contents as required. Look at the following system options: raid.scrub.duration 360 |
Verify (mirroring) | ## verify status aggr verify status Note: Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, verification starts on all online mirrored aggregates. |
Media Scrub | aggr media_scrub status Look at the following system options: |
Disk Commands | |
Display | disk show ## list all unassigned disks disk show -n |
Adding (assigning) | ## Add a specific disk to pool1, the mirror pool disk assign <disk_name> -p 1 ## Assign all disks to pool 0; by default they are assigned to pool 0 if the “-p” ## option is not specified disk assign all -p 0 |
Remove (spin down disk) | disk remove |
Reassign | disk reassign -d |
Replace | disk replace start disk replace stop Note: uses Rapid RAID Recovery to copy data from the specified file system to the specified spare disk, you can stop this process using the stop command |
Zero spare disks | disk zero spares |
fail a disk | disk fail |
Scrub a disk | disk scrub start disk scrub stop |
Sanitize | disk sanitize start disk sanitize abort disk sanitize status disk sanitize release Note: the release modifies the state of the disk from sanitize to spare. Sanitize requires a license. |
Maintenance | disk maint start -d <disk_list> disk maint abort <disk_list> disk maint list disk maint status Note: you can test a disk using maintenance mode |
swap a disk | disk swap disk unswap Note: it stalls all SCSI I/O until you physically replace or add a disk, can used on SCSI disk only. |
Statistics | disk_stat <disk_name> |
Simulate a pulled disk | disk simpull |
Simulate a pushed disk | disk simpush -l disk simpush ## Example ontap1> disk simpush -l The following pulled disks are available for pushing: v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448 ontap1> disk simpush v0.16:NETAPP__:VD-1000MB-FZ-520:14161400:2104448 |
Storage Commands | |
Display | storage show adapter storage show disk [-a|-x|-p|-T] storage show expander storage show fabric storage show fault storage show hub storage show initiators storage show mc storage show port storage show shelf storage show switch storage show tape [supported] storage show acp storage array show storage array show-ports storage array show-luns storage array show-config |
Enable | storage enable adapter |
Disable | storage disable adapter |
Rename switch | storage rename <old_name> <new_name> |
Remove port | storage array remove-port -p |
Load Balance | storage load balance |
Power Cycle | storage power_cycle shelf -h storage power_cycle shelf start -c storage power_cycle shelf completed |
Statistical Information | |
System | stats show system |
Processor | stats show processor |
Disk | stats show disk |
Volume | stats show volume |
LUN | stats show lun |
Aggregate | stats show aggregate |
FC | stats show fcp |
iSCSI | stats show iscsi |
CIFS | stats show cifs |
Network | stats show ifnet |
Environment Information | |
General information | environment status |
Disk enclosures (shelves) | environment shelf [adapter] environment shelf_power_status |
Chassis | environment chassis all environment chassis list-sensors environment chassis Fans environment chassis CPU_Fans environment chassis Power environment chassis Temperature environment chassis [PS1|PS2] |
System Configuration | |
General information | sysconfig sysconfig -v sysconfig -a (detailed) |
Configuration errors | sysconfig -c |
Display disk devices | sysconfig -d sysconfig -A |
Display Raid group information | sysconfig -V |
Display aggregates and plexes | sysconfig -r |
Display tape devices | sysconfig -t |
Display tape libraries | sysconfig -m |
Startup and Shutdown | |
Boot Menu | 1) Normal Boot. 2) Boot without /etc/rc. 3) Change password. 4) Clean configuration and initialize all disks. 5) Maintenance mode boot. |
startup modes | boot_ontap – boots the current Data ONTAP software release stored on the boot device boot_primary – boots the primary Data ONTAP image boot_backup – boots the backup Data ONTAP image boot_diags – boots the diagnostics kernel Note: there are other options but NetApp will provide these as and when necessary |
shutdown | halt [-t <mins>] [-f] -t = shutdown after the number of minutes specified -f = used with HA clustering; means that the partner filer does not take over |
restart | reboot [-t ] [-s] [-r] [-f] -t = reboot in specified minutes -s = clean reboot but also power cycle the filer (like pushing the off button) -r = bypasses the shutdown (not clean) and power cycles the filer -f = used with HA clustering, means that the partner filer does not take over |
Network Interfaces
Display | ifconfig -a ifconfig |
IP address | ifconfig e0 <ip_address> ifconfig e0a <ip_address> # Remove an IP address ifconfig e3 0 |
subnet mask | ifconfig e0a netmask <netmask> |
broadcast | ifconfig e0a broadcast <broadcast_address> |
media type | ifconfig e0a mediatype 100tx-fd |
maximum transmission unit (MTU) | ifconfig e8 mtusize 9000 |
Flow control | ifconfig <interface> flowcontrol <value> Note: value is the flow control type. You can specify the following values for the flowcontrol option: none, receive, send, full |
trusted | ifconfig e8 untrusted Note: You can specify whether a network interface is trustworthy or untrustworthy. When you specify an interface as untrusted (untrustworthy), any packets received on the interface are likely to be dropped. |
HA Pair | ifconfig e8 partner nfo — Enables negotiated failover Note: In an HA pair, you can assign a partner IP address to a network interface. The network interface takes over this IP address when a failover occurs |
Alias | # Create alias ifconfig e0 alias 192.0.2.30 # Remove alias ifconfig e0 -alias 192.0.2.30 |
Block/Unblock protocols | # Block options interface.blocked.cifs e9 options interface.blocked.cifs e0a,e0b # Unblock options interface.blocked.cifs “” |
Stats | ifstat netstat Note: there are many options to both these commands so I will leave to the man pages |
bring up/down an interface | ifconfig <interface> up ifconfig <interface> down |