I need to get a list of human-readable du output.

However, du does not have a "sort by size" option, and piping to sort doesn't work with the human-readable flag.

For example, running:

du | sort -n -r 

Outputs a sorted disk usage by size (descending):

du |sort -n -r
65108   .
61508   ./dir3
2056    ./dir4
1032    ./dir1
508     ./dir2

However, running it with the human-readable flag does not sort properly:

du -h | sort -n -r

508K    ./dir2
64M     .
61M     ./dir3
2.1M    ./dir4
1.1M    ./dir1

Does anyone know of a way to sort du -h by size?


As of GNU coreutils 7.5, released in August 2009, sort allows an -h parameter, which accepts numeric suffixes of the kind produced by du -h:

du -hs * | sort -h

If you are using a sort that does not support -h, you can install GNU Coreutils. For example, on an older Mac OS X:

brew install coreutils
du -hs * | gsort -h

From the sort manual:

-h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)
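If you're unsure whether your sort has it, a quick synthetic check (no du required) is:

printf '508K\n64M\n2.1M\n1.1M\n61M\n' | sort -h

These human-readable sizes come back ordered by magnitude rather than lexically: 508K, 1.1M, 2.1M, 61M, 64M, one per line, with 508K sorting below 1.1M even though 5 > 1.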

Sort on du's raw numbers, then re-run du -hs on each path in sorted order:

du | sort -nr | cut -f2- | xargs du -hs

There is an immensely useful tool I use called ncdu that is designed for finding those pesky high-disk-usage folders and files and removing them. It's console-based, fast and light, and has packages on all the major distributions.

@Douglas Leeder, one more answer: Sort the human-readable output from du -h using another tool. Like Perl!

du -h | perl -e 'sub h{%h=(K=>10,M=>20,G=>30);($n,$u)=shift=~/([0-9.]+)(\D)/;
return $n*2**$h{$u}}print sort{h($b)<=>h($a)}<>;'

Split onto two lines to fit the display. You can use it this way or make it a one-liner, it'll work either way.


4.5M    .
3.7M    ./colors
372K    ./plugin
128K    ./autoload
100K    ./doc
100K    ./syntax

EDIT: After a few rounds of golf over at PerlMonks, the final result is the following:

perl -e'%h=map{/.\s/;99**(ord$&&7)-$`,$_}`du -h`;print@h{sort%h}'

du -k * | sort -nr | cut -f2 | xargs -d '\n' du -sh

As far as I can see you have three options:

  1. Alter du to sort before display.
  2. Alter sort to support human sizes for numerical sort.
  3. Post-process the output from sort to convert the sizes to human-readable form.

You could also do du -k and live with sizes in KiB.

For option 3 you could use the following script:

#!/usr/bin/env python

import sys
import re

sizeRe = re.compile(r"^(\d+)(.*)$")

for line in sys.stdin.readlines():
    mo = sizeRe.match(line)
    if mo:
        size = int(mo.group(1))
        if size < 1024:
            size = str(size)+"K"
        elif size < 1024 ** 2:
            size = str(size/1024)+"M"
        else:
            size = str(size/(1024 ** 2))+"G"

        print "%s%s"%(size,mo.group(2))
    else:
        print line,

I've had that problem as well and I'm currently using a workaround:

du -scBM | sort -n

This will not produce scaled values; it always shows the size in megabytes. That's less than perfect, but for me it's better than nothing (or than displaying the size in bytes).

Found this posting elsewhere: the following pipeline will do what you want without calling du on everything twice. It uses awk to convert the raw bytes to a human-readable format. Of course, the formatting is slightly different (everything is printed to one decimal place of precision).

du -B1 | sort -nr | awk '{ sum=$1;
hum[1024**3]="G"; hum[1024**2]="M"; hum[1024]="K";
for (x=1024**3; x>=1024; x/=1024) {
    if (sum>=x) { printf "%.1f%s\t\t",sum/x,hum[x]; print $2; break }
}}'

Running this in my .vim directory yields:

4.4M            .
3.6M            ./colors
372.0K          ./plugin
128.0K          ./autoload
100.0K          ./syntax
100.0K          ./doc

(I hope 3.6M of color schemes isn't excessive.)

This version uses awk to create extra columns for the sort keys. It only calls du once. The output should look exactly like du -h's.

I've split it into multiple lines, but it can be recombined into a one-liner.

du -h |
  awk '{printf "%s %08.2f\t%s\n",
    index("KMG", substr($1, length($1))),
    substr($1, 1, length($1)-1), $0}' |
  sort -r | cut -f2,3


  • index("KMG", ...) substitutes 1, 2, 3 for K, M, G so lines group by unit; if there's no unit (the size is less than 1K), there's no match and zero is returned (perfect!)
  • substr($1, length($1)) takes the last character of the size field (the unit letter)
  • substr($1, 1, length($1)-1) pulls out the numeric portion of the size
  • printf emits the new fields, unit and value (zero-padded to a fixed length so the alphabetic sort works properly), followed by the original line
  • sort the results, discard the extra columns

Try it without the cut command to see what it's doing.

Here's a version which does the sorting within the AWK script itself (it relies on gawk's asorti function) and doesn't need cut:

du -h |
   awk '{idx = sprintf("%s %08.2f %s",
         index("KMG", substr($1, length($1))),
         substr($1, 1, length($1)-1), $0);
         lines[idx] = $0}
    END {c = asorti(lines, sorted);
         for (i = c; i >= 1; i--)
           print lines[sorted[i]]}'

Here's an example that shows the directories in a more compact summarized form. It handles spaces in directory/filenames.

% du -s * | sort -rn | cut -f2- | xargs -d "\n" du -sh

53G  projects
21G  Desktop
7.2G VirtualBox VMs
3.7G db
3.3G SparkleShare
2.2G Dropbox
272M apps
47M  incoming
14M  bin
5.7M rpmbuild
68K  vimdir.tgz

Sort files by size in MiB:

du --block-size=MiB --max-depth=1 path | sort -n

I have a simple but useful Python wrapper for du called dutop. Note that we (the coreutils maintainers) are considering adding this functionality to sort, so it can sort "human" output directly.

Got another one:

$ du -B1 | sort -nr | perl -MNumber::Bytes::Human=format_bytes -F'\t' -lane 'print format_bytes($F[0])."\t".$F[1]'

I'm starting to like perl. You might have to do a

$ cpan Number::Bytes::Human

first. To all the perl hackers out there: Yes, I know that the sort part can also be done in perl. Probably the du part, too.

This snippet was shamelessly snagged from 'Jean-Pierre' at http://www.unix.com/shell-programming-scripting/32555-du-h-sort.html. Is there a way I can better credit him?

du -k | sort -nr | awk '
     BEGIN {
        split("KB,MB,GB,TB", Units, ",");
     }
     {
        u = 1;
        while ($1 >= 1024) {
           $1 = $1 / 1024;
           u += 1;
        }
        $1 = sprintf("%.1f %s", $1, Units[u]);
        print $0;
     }'

Use the "-g" flag

 -g, --general-numeric-sort
              compare according to general numerical value

Run on my /usr/local directory, it produces output like this:

$ du | sort -g

0   ./lib/site_ruby/1.8/rubygems/digest
20  ./lib/site_ruby/1.8/rubygems/ext
20  ./share/xml
24  ./lib/perl
24  ./share/sgml
44  ./lib/site_ruby/1.8/rubygems/package
44  ./share/mime
52  ./share/icons/hicolor
56  ./share/icons
112 ./share/perl/5.10.0/YAML
132 ./lib/site_ruby/1.8/rubygems/commands
132 ./share/man/man3
136 ./share/man
156 ./share/perl/5.10.0
160 ./share/perl
488 ./share
560 ./lib/site_ruby/1.8/rubygems
604 ./lib/site_ruby/1.8
608 ./lib/site_ruby
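Note that -g compares general floating-point values (including exponents), but it knows nothing about the K/M/G suffixes, so it works on plain du output like the above rather than on du -h output. A quick illustration of what -g adds over -n:

# sort -n stops parsing at the first non-digit, so 1e3 sorts as 1;
# sort -g parses full floating-point syntax, so 1e3 sorts as 1000.
printf '1e3\n5\n2.5e2\n' | sort -g

This prints 5, 2.5e2, 1e3; with -n the order would instead be 1e3, 2.5e2, 5.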

Found this one online... it seems to work OK:

du -sh * | tee /tmp/duout.txt | grep G | sort -rn ; cat /tmp/duout.txt | grep M | sort -rn ; cat /tmp/duout.txt | grep K | sort -rn ; rm /tmp/duout.txt

Another one:

du -h | perl -e'
@l{ K, M, G } = ( 1 .. 3 );
print sort {
    ($aa) = $a =~ /(\w)\s+/;
    ($bb) = $b =~ /(\w)\s+/;
    $l{$aa} <=> $l{$bb} || $a <=> $b
  } <>'

I learned awk from concocting this example yesterday. It took some time, but it was great fun, and I learned how to use awk.

It runs du only once, and its output is much like that of du -h:

du --max-depth=0 -k * | sort -nr | awk '{ if($1>=1024*1024) {size=$1/1024/1024; unit="G"} else if($1>=1024) {size=$1/1024; unit="M"} else {size=$1; unit="K"}; if(size<10) format="%.1f%s"; else format="%.0f%s"; res=sprintf(format,size,unit); printf "%-8s %s\n",res,$2 }'

It shows numbers below 10 with one decimal point.

Here is the simple method I use; it has very low resource usage and gets you what you need:

du --max-depth=1 | sort -n | awk 'BEGIN {OFMT = "%.0f"} {print $1/1024,"MB", $2}'

0 MB ./etc
1 MB ./mail
2 MB ./tmp
123 MB ./public_html

du -cka --max-depth=1 /var/log | sort -rn | head -10 | awk '{print ($1)/1024,"MB ", $2}'

If you need to handle spaces, you can use the following:

 du -d 1| sort -nr | cut -f2 | sed 's/ /\\ /g' | xargs du -sh

The additional sed statement will help alleviate issues with folders with names such as Application Support


command: ncdu

Directory navigation, sorting (name and size), graphing, human readable, etc...

Another awk solution -

du -k ./* | sort -nr |
awk 'BEGIN {split("KB,MB,GB", size, ",")}
{x = 1; while ($1 >= 1024) {$1 = $1 / 1024; x = x + 1}
$1 = sprintf("%-4.2f%s", $1, size[x]); print $0}'

[jaypal~/Desktop/Reference]$ du -k ./* | sort -nr | awk '{split("KB,MB,GB",size,",");}{x = 1;while ($1 >= 1024) {$1 = $1 / 1024;x = x + 1} $1 = sprintf("%-4.2f%s", $1, size[x]); print $0;}'
15.92MB ./Personal
13.82MB ./Personal/Docs
2.35MB ./Work Docs
1.59MB ./Work Docs/Work
1.46MB ./Personal/Raa
584.00KB ./scan 1.pdf
544.00KB ./Personal/Resume
44.00KB ./Membership.xlsx
16.00KB ./Membership Transmittal Template.xlsx

Here is an example:

du -h /folder/subfolder --max-depth=1 | sort -hr


233M    /folder/subfolder
190M    /folder/subfolder/myfolder1
15M     /folder/subfolder/myfolder4
6.4M    /folder/subfolder/myfolder5
4.2M    /folder/subfolder/myfolder3
3.8M    /folder/subfolder/myfolder2

You could also append | head -10 to show only the top 10 (or any other number of) sub-folders in the specified directory.


du -sk /var/log/* | sort -rn | awk '{print $2}' | xargs -I{} du -hs {}

I had been using the solution provided by @ptman, but a recent server change made it no longer viable. Instead, I'm using the following bash script:

# File: duf.sh
# list contents of the current directory by increasing
#+size in human readable format

# for some, "-d 1" will be "--max-depth=1"
du -k -d 1 | sort -g | awk '{
if ($1 < 1024)
    printf("%.0f KB\t%s\n", $1, $2);
else if ($1 < 1024*1024)
    printf("%.1f MB\t%s\n", $1/1024, $2);
else
    printf("%.1f GB\t%s\n", $1/1024/1024, $2);
}'

du -s * | sort -nr | cut -f2 | xargs du -sh

There are a lot of answers here, many of which are duplicates. I see three trends: piping through a second du call, using complicated shell/awk code, and using other languages.

Here is a POSIX-compliant solution using du and awk that should work on every system.

I've taken a slightly different approach, adding -x to ensure we stay on the same filesystem (I only ever need this operation when I'm short on disk space, so there's no point counting things I've mounted within this FS tree or moved elsewhere and symlinked back) and displaying constant units to make for easier visual parsing. In this case, I typically choose not to sort, so I can better see the hierarchical structure.

sudo du -x | awk '
  $1 > 2^20 { s=$1; $1=""; printf "%7sG%s\n", sprintf("%.2f",s/2^21), $0 }'

(Since this is in consistent units, you can then append | sort -n if you really want sorted results.)

This filters out any directory whose (cumulative) content fails to exceed 512MB and then displays sizes in gigabytes. By default, du uses a 512-byte block size (so awk's condition of 2^20 blocks is 512MB and its 2^21 divisor converts the units to GB; we could use du -kx with $1 > 512*1024 and s/1024^2 to be more human-readable). Inside the awk condition, we set s to the size so we can remove it from the line ($0). This retains the delimiter (which is collapsed to a single space), so the final %s represents a space and then the aggregated directory's name. %7s aligns the rounded %.2f GB size (increase to %8s if you have >10TB).

Unlike most of the solutions here, this properly supports directories with spaces in their names (though every solution, including this one, will mishandle directory names containing line breaks).

At least with the usual tools, this will be hard because of the format the human-readable numbers are in. (Note that sort does a "good job" here, in the sense that it sorts the leading numbers correctly: 508, 64, 61, 2.1, 1.1. It just can't account for the unit multiplier that follows each number.)

I'd try it the other way round - use the output from "du | sort -n -r" and afterwards convert the numbers to human-readable format with some script or program.
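As a minimal sketch of that post-processing approach (assuming du's default 1 KiB block size; the breakpoints and one-decimal formatting here are my own choice, not a standard):

# Sort du's raw block counts descending, then rewrite each size
# as K/M/G for display. Note: $2 truncates names containing spaces.
du | sort -nr | awk '{
    size = $1; unit = "K";
    if (size >= 1024*1024) { size /= 1024*1024; unit = "G" }
    else if (size >= 1024) { size /= 1024; unit = "M" }
    printf "%.1f%s\t%s\n", size, unit, $2
}'

Because the sorting happens on the raw numbers before conversion, the order is always correct, unlike sorting the already-humanized output.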

What you can try is:

# note: this still breaks on names containing whitespace
for i in `du -s * | sort -n | cut -f2`; do
  du -h "$i"
done

Hope that helps.