Using zfs (basics)

To use zfs, you need to create at least one zpool first.

After that, you should have something like this:

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
zfstest                23.8G     91K   23.8G     0%  ONLINE     -


This zpool "zfstest" also comes with one zfs filesystem on it. Filesystems are manipulated with the "zfs" command. So keep in mind: "zpool" manages the pool storage, "zfs" creates filesystems and sets their options. Try this:

# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zfstest    88K  23.4G  24.5K  /zfstest



As you can see, the pool "zfstest" also has a filesystem on it, mounted automatically at mountpoint /zfstest.

You may create a new filesystem by using "zfs create":

# zfs create zfstest/king
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
zfstest        118K  23.4G  25.5K  /zfstest
zfstest/king  24.5K  23.4G  24.5K  /zfstest/king


New filesystems within a pool are always named "poolname/filesystemname". Without any additional options, they will also be mounted automatically at "/poolname/filesystemname".

Let's create another one:

# zfs create zfstest/queen
# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zfstest         147K  23.4G  25.5K  /zfstest
zfstest/king   24.5K  23.4G  24.5K  /zfstest/king
zfstest/queen  24.5K  23.4G  24.5K  /zfstest/queen


We already see a difference between old-fashioned filesystems and zfs: usable storage is shared among all filesystems in a pool. "zfstest/king" has 23.4G available, and so do "zfstest/queen" and the pool's root filesystem "zfstest".

So why create filesystems then? Couldn't we just use subdirectories in our master pool filesystem "zfstest" (mounted on /zfstest)?

The "trick" about zfs filesystems is that you can assign options to each of them, so they can be treated differently. We will see that later.

First, let's push some dummy data onto our newly created filesystem:

# dd if=/dev/zero bs=128k count=5000 of=/zfstest/king/bigfile
5000+0 records in
5000+0 records out


This command creates a file "bigfile" in directory /zfstest/king, consisting of 5000 records of 128 kilobytes each, i.e. 625 megabytes. That's big enough for our purpose.
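A quick sanity check of that size (plain shell arithmetic, runnable anywhere):

```shell
# 5000 records of 128 KB each, as written by dd above
bytes=$((5000 * 128 * 1024))
echo "$bytes bytes"                      # 655360000 bytes
echo "$((bytes / 1024 / 1024)) MB"       # 625 MB
```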

"zfs list" reads:

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zfstest         625M  22.8G  27.5K  /zfstest
zfstest/king    625M  22.8G   625M  /zfstest/king
zfstest/queen  24.5K  22.8G  24.5K  /zfstest/queen


625 megabytes are used by filesystem zfstest/king, as expected. Notice also that every other filesystem on that pool can now only allocate 22.8G, as 625M are taken (compare with the 23.4G above, before the big file was created).

You CAN also look up free space in your zfs filesystems with "df -k", but I wouldn't recommend it: you won't see snapshots, and the numbers can get very big.

Example for our zpool "zfstest":

# df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0d0s0      14951508 5725184 9076809    39%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
[... lines omitted ...]
zfstest              24579072      27 23938789     1%    /zfstest
zfstest/king         24579072  640149 23938789     3%    /zfstest/king
zfstest/queen        24579072      24 23938789     1%    /zfstest/queen


So 22.8G are 23938789 kilobytes ("df -k" reports 1K blocks, not bytes). Sun uses 1K = 1024 bytes, 1M = 1024K, 1G = 1024M, 1T = 1024G. They're a computer company and not an ISO metric organization...
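You can verify the conversion yourself with a one-liner, using the 1G = 1024 * 1024 K convention from above:

```shell
# df -k reports 1K blocks, so divide twice by 1024 to get gigabytes
awk 'BEGIN { printf "%.1fG\n", 23938789 / 1024 / 1024 }'    # prints 22.8G
```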

So let's try our first option: "quota".
As you can imagine, a quota limits storage. You know the concept: nearly every mailbox provider imposes a quota on your storage, as do file hosting providers.
First: to set and read options, you use "zfs set" and "zfs get", respectively.

So here we define a quota on zfstest/queen:

# zfs set quota=5G zfstest/queen

Result:

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zfstest         625M  22.8G  27.5K  /zfstest
zfstest/king    625M  22.8G   625M  /zfstest/king
zfstest/queen  24.5K  5.00G  24.5K  /zfstest/queen


Only 5G are left to use at mountpoint /zfstest/queen. Note that you may still gobble up 22.8G in /zfstest/king, making it impossible to put 5G into /zfstest/queen afterwards. So a quota does not guarantee any storage, it only limits it.

To guarantee a certain amount of storage, use the option "reservation":

# zfs set reservation=5G zfstest/queen

Now we have simulated a classical "partition": we reserved the same amount of storage as the quota allows, 5G:

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zfstest        5.61G  17.8G  27.5K  /zfstest
zfstest/king    625M  17.8G   625M  /zfstest/king
zfstest/queen  24.5K  5.00G  24.5K  /zfstest/queen


The other filesystems only have 17.8G left, as 5G are really reserved for zfstest/queen.
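The accounting is easy to follow: the pool had 22.8G available before, and the full 5G reservation is subtracted from it:

```shell
# available space for the other filesystems = previous avail - reservation
awk 'BEGIN { printf "%.1fG\n", 22.8 - 5 }'    # prints 17.8G
```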

Now, let's try another nice option: compression.
Perhaps you are now thinking of the compression nightmares on Windows systems, like DoubleSpace, Stacker and all those other parasitic programs which killed performance instead of saving storage. Forget them! zfs compression IS reliable - and fast!
With today's CPU power, compressing and decompressing data on the fly won't significantly harm your overall performance - it can even boost performance, as compression reduces the amount of i/o needed.
As with many other zfs options, changing the compression setting only affects newly written files/blocks. Previously written uncompressed blocks can still be read. It's transparent to the application: fseek() et al. do not even notice that files are compressed.

# zfs set compression=on zfstest/queen

Now, compression is activated on /zfstest/queen (as "zfstest/queen" is mounted on /zfstest/queen, we did not change the mountpoint - and yes, you're right, the mountpoint is also just another zfs option...).

Let's copy our "bigfile" from king to queen:

# cp /zfstest/king/bigfile /zfstest/queen

OK, THIS is unfair: as our file consists only of zeroes, zfs does not even have to compress it - it just records that 655360000 bytes of zeroes have to be generated. The file is stored as a sparse file ("hole file"), which avoids wasting space and makes naive benchmarks look spectacular:

# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zfstest        5.61G  17.8G  27.5K  /zfstest
zfstest/king    625M  17.8G   625M  /zfstest/king
zfstest/queen  24.5K  5.00G  24.5K  /zfstest/queen


No space needed in zfstest/queen... You may check this with "ls -las" (option "-s" prints the number of disk blocks needed to store each file):

# ls -las /zfstest/queen
total 7
   3 drwxr-xr-x   2 root     sys            3 Apr 23 06:17 .
   3 drwxr-xr-x   4 root     sys            4 Apr 23 06:05 ..
   1 -rw-r--r--   1 root     root     655360000 Apr 23 06:18 bigfile


One block. On our uncompressed king filesystem the situation looks like this:

# ls -las /zfstest/king
total 1280257
   3 drwxr-xr-x   2 root     sys            4 Apr 23 06:19 .
   3 drwxr-xr-x   4 root     sys            4 Apr 23 06:05 ..
1280251 -rw-r--r--   1 root     root     655360000 Apr 23 06:10 bigfile
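
Sparse ("hole") files are not zfs-specific; most Unix filesystems support them. A small illustration on a GNU/Linux system (the exact block count depends on the filesystem):

```shell
# Create a 10 MB file without writing any data: dd just extends the
# file to the seek offset, leaving one big hole
dd if=/dev/zero of=sparsefile bs=1 count=0 seek=10485760 2>/dev/null
ls -l sparsefile     # apparent size: 10485760 bytes
du -k sparsefile     # allocated space: 0 on filesystems with hole support
rm sparsefile
```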


To create a more "real world" file, we will use the output of "zfs get all", which lists ALL options of a zfs filesystem:

# zfs get all zfstest/queen
NAME           PROPERTY       VALUE                  SOURCE
zfstest/queen  type           filesystem             -
zfstest/queen  creation       Wed Apr 23  6:05 2008  -
zfstest/queen  used           24.5K                  -
zfstest/queen  available      5.00G                  -
zfstest/queen  referenced     24.5K                  -
zfstest/queen  compressratio  1.00x                  -
zfstest/queen  mounted        yes                    -
zfstest/queen  quota          5G                     local
zfstest/queen  reservation    5G                     local
zfstest/queen  recordsize     128K                   default
zfstest/queen  mountpoint     /zfstest/queen         default
zfstest/queen  sharenfs       off                    default
zfstest/queen  checksum       on                     default
zfstest/queen  compression    on                     local
zfstest/queen  atime          on                     default
zfstest/queen  devices        on                     default
zfstest/queen  exec           on                     default
zfstest/queen  setuid         on                     default
zfstest/queen  readonly       off                    default
zfstest/queen  zoned          off                    default
zfstest/queen  snapdir        hidden                 default
zfstest/queen  aclmode        groupmask              default
zfstest/queen  aclinherit     secure                 default
zfstest/queen  canmount       on                     default
zfstest/queen  shareiscsi     off                    default
zfstest/queen  xattr          on                     default


As you can see, the "compressratio" property (which is read-only, so you may only use "zfs get" on it, not "zfs set") gives the compression ratio of your filesystem - but our "zero file" does not count, so it remains at 1.00x.

So let's now create another file in our compressed queen filesystem:

# zfs get all zfstest/queen > /zfstest/queen/outputfile

Our file will use 3 disk blocks:

# ls -las /zfstest/queen
total 10
   3 drwxr-xr-x   2 root     sys            4 Apr 23 06:18 .
   3 drwxr-xr-x   4 root     sys            4 Apr 23 06:05 ..
   1 -rw-r--r--   1 root     root     655360000 Apr 23 06:18 bigfile
   3 -rw-r--r--   1 root     root        1598 Apr 23 06:18 outputfile


Let's copy it to our uncompressed king filesystem:

# cp /zfstest/queen/outputfile /zfstest/king/

Here it will use 5 blocks:

# ls -las /zfstest/king
total 1280262
   3 drwxr-xr-x   2 root     sys            4 Apr 23 06:19 .
   3 drwxr-xr-x   4 root     sys            4 Apr 23 06:05 ..
1280251 -rw-r--r--   1 root     root     655360000 Apr 23 06:10 bigfile
   5 -rw-r--r--   1 root     root        1598 Apr 23 06:19 outputfile
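
From these block counts you can make a rough per-file estimate of the compression effect - 5 blocks uncompressed versus 3 blocks compressed for the same 1598-byte file (back-of-the-envelope math, not the "compressratio" property):

```shell
# blocks on the uncompressed filesystem / blocks on the compressed one
awk 'BEGIN { printf "%.2fx\n", 5 / 3 }'    # prints 1.67x
```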


These were the basic steps to create zfs filesystems, but at least one command is missing: how do you destroy filesystems? Use "zfs destroy":

# zfs destroy zfstest/king
# zfs destroy zfstest/queen


Note that the filesystem must not be in use, otherwise the destroy will fail (just as unmounting (umount) a classical filesystem fails when it's in use).

Note that you may NOT destroy "zfstest" this way, because that's the root filesystem of your pool; destroy the pool itself if you want to get rid of it:

# zfs destroy zfstest     
cannot destroy 'zfstest': operation does not apply to pools
use 'zfs destroy -r zfstest' to destroy all datasets in the pool
use 'zpool destroy zfstest' to destroy the pool itself
