Old 02-11-2012, 12:38 PM  
Barry-xlovecam
It's 42
 
Join Date: Jun 2010
Location: Global
Posts: 18,083
Damn Duplicates -- p00f begone!

I wrote a fast file de-duplicator; the use of grep here is interesting.
Clean your lists up, etc. ...

Code:
#!/usr/bin/perl
####################################
# nodupes.cgi
# You may use this free, as-is, with no warranty.
# Make the outfile chmod 666 if running under the webserver.
# chmod this script to 755.
####################################
use strict;
use warnings;
use CGI::Carp qw/fatalsToBrowser/;
use CGI qw/:standard/;

print "Content-type: text/html\n\n";

# Bail out if the query string contains anything outside [a-zA-Z0-9_].
# (A match test, not s/// -- we only want to inspect, not modify.)
my $query = $ENV{'QUERY_STRING'} || '';
if ($query =~ /[^a-zA-Z0-9_]/) { print qq~HUH???~; exit; }

my $infile  = "somesiteurl.txt";
my $outfile = "somesiteurlduped.txt";

open(my $in, "<", $infile) || die "cannot open $infile: $!\n";
my @array = <$in>;
close $in;

# Chomp before comparing, so a final line with no trailing
# newline still matches its earlier duplicates.
chomp @array;

open(my $out, ">>", $outfile) || die "cannot open $outfile: $!\n";

# Keep only the first occurrence of each line.
my %seen   = ();
my @unique = grep { !$seen{$_}++ } @array;

foreach my $unique (@unique) {
    print $out "$unique\n";
}

close $out;
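The grep line is the whole trick: %seen counts how many times each line has appeared, and grep keeps a line only on its first sighting, when the count is still zero (false) before the post-increment. A minimal sketch of the idiom on an in-memory list (the sample values are just made up):

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Order-preserving de-dupe: first occurrence wins.
my @lines  = ("a.com", "b.com", "a.com", "c.com", "b.com");
my %seen;
my @unique = grep { !$seen{$_}++ } @lines;

print join(",", @unique), "\n";   # a.com,b.com,c.com

Note it preserves input order, unlike `sort | uniq`, which is why it works on lists where order matters.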

Last edited by Barry-xlovecam; 02-11-2012 at 12:39 PM..