Quote:
Originally Posted by Barry-xlovecam
(Post 18754251)
Code:
cat infile.txt|sort -u > outfile.txt
No spaces and the outfile
That is a lot easier, fris, ty
an awk way to do it, removing dups without sorting
Code:
awk '!x[$0]++' file.txt
perl without sorting
Code:
perl -ne 'print if !$a{$_}++'
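Both one-liners use the same trick: a hash keyed on the whole line. The post-increment returns 0 (false) the first time a line is seen, so the negation is true and the line prints; every later time the counter is nonzero and the line is skipped. A quick check on throwaway input (the printf data is just for this demo):

```shell
# First sight: x[$0] is 0, so !x[$0]++ is true and the line prints;
# on every repeat the counter is >0 and the line is skipped.
printf 'a\nb\na\nc\nb\n' | awk '!x[$0]++'
# prints:
# a
# b
# c
```

Note the original order is preserved, which `sort -u` cannot do.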
these would remove dupe entries from a file, keyed on the first column only
awk
Code:
awk '{ if ($1 in stored_lines) next; print; stored_lines[$1]=1 }' infile.txt > outfile.txt
perl
Code:
perl -ane 'print unless $x{$F[0]}++' infile > outfile
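The same hash-counter idiom covers the column case too, keying on field one instead of the whole line. A compact sketch on made-up sample data (the `!seen[$1]++` form is equivalent to the longer awk above):

```shell
# Keep only the first line seen for each value in column 1.
printf '1 foo\n2 bar\n1 baz\n' | awk '!seen[$1]++'
# prints:
# 1 foo
# 2 bar
```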
sunday gfy bonus
count and show duplicate file names
Code:
find . -type f |sed "s#.*/##g" |sort |uniq -c -d
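The sed strips everything up to the last slash, leaving bare filenames, and `uniq -c -d` prints only names that occur more than once, prefixed with a count. A sketch in a throwaway directory (the `/tmp/dupdemo` path and filenames are made up for the demo):

```shell
# Two files named report.txt in different dirs, plus one unique name.
mkdir -p /tmp/dupdemo/a /tmp/dupdemo/b
touch /tmp/dupdemo/a/report.txt /tmp/dupdemo/b/report.txt /tmp/dupdemo/a/only.txt
cd /tmp/dupdemo
find . -type f | sed "s#.*/##g" | sort | uniq -c -d
# shows report.txt with a count of 2; only.txt is not listed
```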
extra bonus
find duplicate files based on filesize, then md5 hash
Code:
find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
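The same pipeline broken out one stage per line with comments, since it packs a lot in. GNU find and coreutils are assumed (`-printf`, `uniq -w`, `--all-repeated` are GNU extensions), and an explicit `.` path is added for portability; `uniq -w32` compares only the first 32 characters of each `md5sum` line, i.e. the hash itself:

```shell
find . -not -empty -type f -printf "%s\n" |  # print each file's size in bytes
  sort -rn |                                 # group equal sizes together
  uniq -d |                                  # keep only sizes seen twice or more
  xargs -I{} -n1 find . -type f -size {}c -print0 |  # re-find files of exactly those sizes
  xargs -0 md5sum |                          # hash only those candidates
  sort |                                     # group equal hashes together
  uniq -w32 --all-repeated=separate          # print groups whose md5 (first 32 chars) matches
```

The size pass is just a cheap pre-filter so md5sum only runs on files that could possibly be duplicates.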
:pimp: :pimp: