PowerShell: use xcopy, robocopy, or Copy-Item?
The reason for switching our batch files to PowerShell scripts is to improve the error checking of the process. Does the cmdlet for copying (Copy-Item) have any advantages in that regard?
If a batch file already exists that uses xcopy to copy files by filename, one call per file, is there any advantage to converting that syntax to Copy-Item? (A sketch of the kind of conversion I mean is below.)
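For example, with hypothetical paths, the per-file xcopy calls in such a batch file could collapse into a single Copy-Item call, since -Path accepts a list of paths:

    Copy-Item -Path '\\fileserver\data\report1.txt', '\\fileserver\data\report2.txt' -Destination 'D:\backup\'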
What are the advantages of robocopy, xcopy, and Copy-Item compared to each other? For example, does robocopy have an advantage when working with a large number of small files on a reliable network? If the script will run simultaneously on hundreds of computers and copy hundreds of files on each of them, does that affect the decision? Should the decision focus on the permissions of the files?
The primary advantage is that you can send objects to Copy-Item through the pipeline instead of strings or filespecs. You can do:

    Get-ChildItem '\\fileserver\photos\*.jpeg' -File |
        Where-Object { ($_.LastAccessTime -ge (Get-Date).AddDays(-1)) -and ($_.Length -le 500000) } |
        Copy-Item -Destination '\\webserver\photos\'
That's kind of a poor example (you could just use Copy-Item -Filter for that), but it's the easiest one to come up with on the fly. It's pretty common when working with files to end up in a pipeline with Get-ChildItem anyway, and I tend to do it a lot because of the -Recurse -Include bug in Remove-Item.
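To show what I mean by that (paths are placeholders again): -Filter covers the name-pattern part on its own, while criteria like last-access time or size are what push you into the pipeline.

    Copy-Item -Path '\\fileserver\photos\*' -Destination '\\webserver\photos\' -Filter '*.jpeg'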
You also get PowerShell's error trapping, the special parameters -PassThru, -WhatIf, and -UseTransaction, and the common parameters as well. Copy-Item -Recurse can replicate some of xcopy's tree-copying abilities, but it's pretty bare-bones.
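As a rough sketch of the error-trapping angle (hypothetical paths; -ErrorAction Stop promotes non-terminating copy errors so the try/catch can handle them):

    try {
        Copy-Item -Path '\\fileserver\photos\*.jpeg' -Destination '\\webserver\photos\' -ErrorAction Stop -PassThru |
            ForEach-Object { Write-Verbose "Copied $($_.FullName)" }
    }
    catch {
        Write-Warning "Copy failed: $_"
    }

    # Preview the copy without performing it
    Copy-Item -Path '\\fileserver\photos\*.jpeg' -Destination '\\webserver\photos\' -WhatIf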
Now, if you need to maintain ACLs, ownership, auditing, and the like, xcopy or robocopy is going to be easier because that functionality is built in. I'm not sure how Copy-Item handles copying encrypted files to non-encrypted locations (xcopy has the ability to do this), and I don't believe Copy-Item supports managing the archive attribute directly.
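For instance, a robocopy call along these lines (paths are placeholders) carries the security metadata across: /E copies subdirectories including empty ones, and /COPYALL copies data, attributes, timestamps, NTFS ACLs, owner, and auditing information.

    robocopy '\\fileserver\photos' '\\webserver\photos' *.jpeg /E /COPYALL /R:2 /W:5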
If it's speed you're looking for, I suspect xcopy and robocopy will win out; managed code has higher overhead in general. xcopy and robocopy also offer a lot more control over how they work across the network.
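A couple of examples of that network control (placeholder paths): /MT:16 runs 16 copy threads, which tends to help with large numbers of small files on a reliable link, while /Z (restartable mode) and /IPG:50 (a 50 ms inter-packet gap) trade raw speed for resilience and throttling on slower links.

    robocopy '\\fileserver\photos' '\\webserver\photos' /E /MT:16
    robocopy '\\fileserver\photos' '\\webserver\photos' /E /Z /IPG:50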