Wednesday 24 March 2010

why is it not possible to edit and reply to comments in blogger??!!

I am having second thoughts about using Blogger. I think I will go back to WordPress, where the editing tool is nicer, I can reply to comments (they become nested), and I can edit a comment if I make a mistake.

html perl one-liners

From part five of the "Perl One-Liners Explained" series:
http://www.catonmat.net/blog/perl-one-liners-explained-part-five/
65. URL-escape a string.

perl -MURI::Escape -le 'print uri_escape($string)'

You’ll need to install the URI::Escape module as it doesn’t come with Perl. The module exports two functions - uri_escape and uri_unescape. The first one does URL-escaping (sometimes also referred to as URL encoding), and the other does URL-unescaping (URL decoding).

66. URL-unescape a string.

perl -MURI::Escape -le 'print uri_unescape($string)'

This one-liner uses the uri_unescape function from URI::Escape module to do URL-unescaping.

67. HTML-encode a string.

perl -MHTML::Entities -le 'print encode_entities($string)'

This one-liner uses the encode_entities function from the HTML::Entities module. This function encodes HTML entities. For example, < and > get turned into &lt; and &gt;.

68. HTML-decode a string.

perl -MHTML::Entities -le 'print decode_entities($string)'

This one-liner uses the decode_entities function from HTML::Entities module.

More links
============

perl unicode: combining sequences vs precomposed graphemes


http://www.effectiveperlprogramming.com/blog/102


-----
use charnames ':full';

my $string =
    "\N{LATIN SMALL LETTER A WITH DIAERESIS}"    # precomposed ä
  . "\N{LATIN SMALL LETTER A}"
  . "\N{COMBINING DIAERESIS}"                    # a + combining mark, also ä
  ;

# the string prints as: ää
------
my (@g) = $string =~ /(.)/g;

say scalar(@g);   # 3
# /./ matches characters: ä, a, and the bare combining diaeresis
-----
my (@g) = $string =~ /(\X)/g;

say scalar(@g);   # 2
# \X matches graphemes: ä and ä

------

my $precomposed =
"\N{LATIN SMALL LETTER A WITH DIAERESIS}";

my $combined =
"\N{LATIN SMALL LETTER A}" .
"\N{COMBINING DIAERESIS}";
------

if ($precomposed eq $combined) {
say 'equal';
} else {
say 'unequal';
}
unequal
------
use Unicode::Normalize;

my $postcomposed = NFC($combined);

if ($precomposed eq $postcomposed) {
say 'equal';
} else {
say 'unequal';
}
equal


------
use open IO  => ':utf8';       # default layer for both input and output
use open OUT => ':shiftjis';   # output only
use open IN  => ':cp1251';     # input only

open my ($ofh), '>:utf8', $filename;

open my ($ifh), '<:encoding(iso-8859-1)', $filename;

------

But what about command line arguments?
use I18N::Langinfo qw(langinfo CODESET);
use Encode qw(decode);

my $codeset = langinfo(CODESET);

@ARGV = map { decode $codeset, $_ } @ARGV;
------


------

Basic exception handling in Perl

The basic try/catch pattern in Perl:
http://www.perlfoundation.org/perl5/index.cgi?exception_handling

[copy-paste]
The most basic way to handle exceptions in Perl is to use an eval BLOCK and have $@ transmit the error.
eval {                          # try
    ...run the code here....
    1;
} or do {                       # catch
    ...handle the error using $@...
};

Because the eval block ends with a true statement, it will always return true if it succeeds and false otherwise.
$@ can be a simple string, a reference, or an object, and is "thrown" by die.

Dangers of using $@

The more common, but subtly flawed, idiom is to check $@ to see if the eval failed.
# FLAWED!
eval {                           # try
    ...run the code here...
};
if( $@ ) {                       # catch
    ...handle the error using $@...
}

This method is essentially non-atomic and has proven finicky. The problem lies with using $@: it is global, and many things can reset it, intentionally or otherwise, between the eval block failing and the moment it is checked.
=====
Other blog entry about error handling

http://perliscope.blogspot.com/2009/11/perl-error-handling.html


=====
[update]
CPAN modules

Try::Tiny          # simple and functional
TryCatch           # has more dependencies but is nicer than Try::Tiny

Error
Exception::Class

Other modules are useful for checking exception handling during testing.


========

== EDITED 2010-10-15==
See the new discussion on PerlMonks:

Best Practices for Exception Handling

        Some reasons I like exceptions:
        1. Robustness. I can forget to check for a returned error value. I cannot forget to check for an exception.
        2. Brevity. I prefer:

          $o->foo->bar->fribble->ni

          to

          $o->foo or return(ERROR_FOO);
          $o->bar or return(ERROR_BAR);
          $o->fribble or return(ERROR_FRIBBLE);
          $o->ni or return(ERROR_NI);
        3. Clarity. With exception based code the "normal" flow of control is more explicit because it is not obscured by error handling code. I think that the first of the two code examples above shows the intent of the code more directly than the second does.
        4. Separation of concerns. The error condition and the error handler are different ideas.




          • You may want an error to be handled in different ways depending on the context.
          • You may also not know how the error should be handled at the point it occurs.
          • You may not know how the error should be handled at the time you write the code.
          With the return-error-code style you end up having to either:




          • propagate error conditions up to where the decision on how they should be handled can be made, or
          • propagate error handlers down to where the errors may occur.
          Both options rapidly become messy if there are many levels of code between the error condition and the error handler.
        5. No confusion between return values and error conditions.
        There are probably some more ;-)
Re: Re: Best Practices for Exception Handling
by IlyaM (Parson) on Jan 30, 2003 at 09:54 UTC: The fact is that when you write code in a modular fashion, one part of your system cannot always know how to handle errors itself. In such cases the only thing you can do is pass the error somewhere else, and there are in general two ways to do it: exceptions and return codes. Exceptions are just the more robust way.
Example: say you are implementing the business logic for an application which has multiple frontends (CLI, web and GUI). This part of your application encounters an error (say, a database connection error). What should it do? Print an HTML page with the error? Produce a plain-text formatted error message for the CLI? Write something to the log? No, it is not the responsibility of this part of your system to do these things; it is the responsibility of the frontend to handle this error. So you just raise an exception and let the frontend handle it.

Do you agree that the logic responsible for asking users for better input belongs in the user interface part and is not part of the business logic? I.e., the business logic layer has to call back into the user interface part when it tries to do the error recovery. So you still have to pass control elsewhere to do the error recovery that the business logic part cannot do on its own, just as if you were using exception- or return-code-style error handling. To me it looks like the error recovery mechanism is essentially the same in both cases; the only difference is the added complexity of the callback approach. Let's plot a couple of diagrams. Traditional approach (exceptions or return codes) for the case where the business logic part bails with an error:
  UserInterface                    BusinessLogic
        |          user input            |
        |------------------------------->| do some action
        |<-----------failure-------------|
        | handle the error               |
If at the last point the user interface part can handle the error, it can either ask the business logic part to redo the action or, if the error is unrecoverable, print a diagnostic or do something else. Now the callback approach:
  UserInterface          BusinessLogic          UserInterface
        |   user input         |
        |--------------------->| do some action
        |                      |------failure-------->| handle the error
This diagram clearly shows two problems:
  1. This design adds an additional reentrancy requirement on at least the user interface, and probably on other parts of the system if the error recovery callback calls them.
  2. If the error recovery callback can handle the error, the program flow is clear: it returns control to the business logic part, which in turn returns control to its caller, i.e. back to the user interface part. But what about the case when it cannot handle the error? You still have to return control using either exceptions or return codes! Why then bother with callbacks at all?

Personally I think the following little trick/modification makes for cleaner code... (and I think that raise_error makes more sense if it contains the error message returned...)

sub bar {
    my( $self, @args ) = @_;
    eval { $self->method( @args ); 1 }
        or $self->raise_error( $@, @_ ) and return undef;
}

If you can contrive to make raise_error() return undef you can make it even cleaner:

sub bar {
    my( $self, @args ) = @_;
    eval { $self->method( @args ); 1 }
        or return $self->raise_error( $@, @_ );
}

MAPPING files into memory vs READING them

brian d foy's 'Effective Perler' blog talks about mapping files into memory to avoid I/O and a large memory footprint:

Memory-map files instead of slurping them

It uses the File::Map module.


use File::Map qw(map_file);

{
my $start = time;
map_file my $map, '/Volumes/Hercules/Red/revealing_it_all_big.mov';
my $loadtime = time - $start;
print "Loaded file in $loadtime seconds\n";
my $count = () = $map =~ /abc/g;
print "Found $count occurrences\n";
}

[copy paste from the blog]
The $map acts just like a normal Perl string, and you don’t have to worry about any of the mmap details. When the variable goes out of scope, the map is broken and your program doesn’t suffer from a large chunk of unused memory.
In Tim Bray’s Wide Finder contest to find the fastest way to process log files with “wider” rather than “faster” processors, the winning solution was a Perl implementation using mmap (although using the older Sys-Mmap). Perl had nothing special in that regard, because most of the top solutions used mmap to avoid the I/O penalty.
The mmap is especially handy when you have to do this with several files at the same time (or even sequentially if Perl needs to find a chunk of contiguous memory). Since you don’t have the data in real memory, you can mmap as many files as you like and work with them simultaneously.
Also, since the data actually live on the disk, different programs running at the same time can share the data, including seeing the changes each program makes (although you have to work out the normal concurrency issues yourself). That is, mmap is a way to share memory.
The File::Map module can do much more too. It allows you to lock filehandles, and you can also synchronize access from threads in the same process.
If you don’t actually need the data in your program, don’t ever load it: mmap it instead.
[/end copy paste]

put perl under git and create branches for different module sets

Instead of having a core perl distribution and then putting directories in your home for different sets of CPAN module installations and dealing with it by changing @INC, another solution is to put your perl installation under git control.

Manage your Perl modules with git

Read this brian d foy blog entry to find out how to do it, and some advantages of this approach.

perl get-options good practices

Reading a recent post at http://perlbuzz.com/ :
The horrible bug your command line Perl program probably has

It talks about the best practice of always testing system call return values. But it also makes the good point that many users forget to test the result of GetOptions. If you don't know what Getopt::Long is, and you are writing perl scripts for command line usage, then you are missing a very important tool.


For the record, I am putting here my standard GetOptions scaffold (I have it as a template in my .emacs):

use Getopt::Long;  

my $prog = $0;
my $usage = <<EOQ;
Usage for $0:

  >$prog [-test -help -verbose]

EOQ

my $help;
my $test;
my $debug;
my $verbose =1;
my $log;
my $stdout;
my $ok = GetOptions(
                    'test'      => \$test,
                    'debug:i'   => \$debug,
                    'verbose:i' => \$verbose,
                    'help'      => \$help,
                    'log'       => \$log,
                    'stdout'    => \$stdout,
                   );

if ($help || !$ok ) {
    print $usage;
    exit;
}

I capture the return value of GetOptions and print the usage message on error or when help is requested.

I encourage you to put this in your editor's perl templates.

PERL: checking whether a set of variables all have the same value

This post originates from a PerlMonks thread, back in 2001, that I was reading.

I am posting it here because of the perlish way of using double-negation logic in '!grep != ', and the trick of joining the variables into a string and checking what remains after removing the first value. Sometimes these tricks can be useful (despite not being pretty).

In response to draegtun's comment:

Thanks for the Perl6::Junction hint, but I will probably go for List::MoreUtils:
DB<8> use List::MoreUtils qw(any all)

DB<2> $a=1; $b=1;$c=1;$d=1;$e=1;$z=2

DB<9> (all{$_ ==$a} ($a,$b,$c,$d,$e, $z))? print "doing st because equal\n" : print "sorry not equal\n"
sorry not equal

DB<10> (all{$_ ==$a} ($a,$b,$c,$d,$e))? print "doing st because equal\n" : print "sorry not equal\n"
doing st because equal




in reply to Re: Re: Fastest way to compare multiple variables?
in thread Fastest way to compare multiple variables?

on May 15, 2001 at 21:10 UTC ( #80620=perlquestion )

Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
Hi, is there any way (without using hashes) to do something like

if ($a == $b == $c == $d) {
    &do_something;
}

What I have: 15-20 variables that ALREADY have their values (integers).
What I need: to tell whether all those variables have the same value or not.
What I know: there are already ways to do that by using arrays/hashes.
Or is a hash still the fastest way? Thank you...
TIMTOWTDI 1:
Re: Fastest way to compare multiple variables?
by Masem (Monsignor) on May 15, 2001 at 21:14 UTC

    TIMTOWTDI:
    if ( !grep { $_ != $a } ($b, $c, $d, $e...) ) { &do_something }

    Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain
TIMTOWTDI 2: merlyn's way,
on May 15, 2001 at 21:51 UTC ( #80645=note )
use CGI;
@array1 = param('datalist1');
@array2 = param('datalist2');
@array3 = param('datalist3');

If you could make that instead:

my %data = map { $_ => [param $_] }
    qw(datalist1 datalist2 datalist3);

Then we can compare their lengths with:

sub compare {
    my @lengths = map { scalar @{$data{$_}} } qw(datalist1 datalist2 datalist3);
    my $first = shift @lengths;
    $first == $_ or return 0 for @lengths;
    return 1;
}
See how much easier? Regularity in variable names is almost always a sign that they should be part of a larger structure instead. -- Randal L. Schwartz, Perl hacker
TIMTOWTDI 3: using s///g. It is probably slower if you have a lot of values, but the idea of eliminating matches from a joined string and checking whether anything remains, instead of using grep, is 'another way of doing it'.

Try this:

#!/usr/local/bin/perl -w
use strict;
my @list = ("abcd123", "abcd143", "abcd123", "abcd123");
$_ = join("", @list);
s/\Q$list[0]\E//g;    # \Q...\E guards against regex metacharacters in the data
print "not equal\n" if ($_);