
Scripts Ng/d


gregus

Recommended Posts

Posted

Hi,

I'm back with yet another question. There is one major problem left that I'm running into when using NZBGet for newsgroups.

NZBGet verifies and/or repairs the archives after downloading them, so that the data can be extracted.

My problem is this: on some downloads the files (.rar archives) are badly named, and at the end of the download, when NZBGet runs the verification and extraction, it reports an error saying it cannot rebuild the downloaded file because too many archives are missing, even though they are actually all there, just misnamed...

I'm surprised the software can't handle this, when QuickPar on Windows, for example, deals with it just fine and renames the files in question.

I'm so surprised that I suspect the problem is on my end, so I'm attaching the two files (the NZBGet config and the unrar script). If anyone spots the option that would solve my problem, I'm all ears.

# Sample configuration file for nzbget

 #

 # On POSIX put this file to one of the following locations:

 # ~/.nzbget

 # /etc/nzbget.conf

 # /usr/etc/nzbget.conf

 # /usr/local/etc/nzbget.conf

 # /opt/etc/nzbget.conf

 #

 # On Windows put this file in program's directory.

 #

 # You can also put the file into any location, if you specify the path to it

 # using switch "-c", e.g:

 #   nzbget -c /home/user/myconfig.txt


 # For quick start change the option MAINDIR and configure one news-server



 ##############################################################################

 ### PATHS ###


 # Root directory for all related tasks

 # MAINDIR is a variable and therefore starts with "$"

 # The value "~/download" is example for POSIX. On Windows use 

 # absolute paths, like "C:\Download".

 $MAINDIR=~/download


 # Destination-directory to store the downloaded files

 DestDir=${MAINDIR}/dst


 # Directory to monitor for incoming nzb-jobs.

 # Can have subdirectories (only one level of nesting). 

 # A nzb-file queued from a subdirectory will be automatically assigned to 

 # category with the directory-name.

 NzbDir=${MAINDIR}/nzb


 # Directory to store download queue

 QueueDir=${MAINDIR}/queue


 # Directory to store temporary files

 TempDir=${MAINDIR}/tmp


 # Lock-file for daemon-mode, contains process-id (PID) (POSIX only)

 LockFile=/tmp/nzbget.lock


 # Where to store log file, if it needs to be created (see "CreateLog")

 LogFile=${destdir}/nzbget.log



 ##############################################################################

 ### NEWS-SERVERS ###


 # This section defines which servers nzbget should connect to.

 # The servers will be ordered by their level, i.e. nzbget will at

 # first try to download an article from the level-0-server.

 # If that server fails, nzbget proceeds with the level-1-server, etc.

 # A good idea is surely to put your major download-server at level 0

 # and your fill-servers at levels 1,2,...

 # NOTE: Do not leave out a level in your server-list and start with level 0!

 # NOTE: Several servers with the same level may be used, they will have 

 # the same priority.


 # First server, on level 0

 # Level of newsserver

 Server1.Level=0

 # Host-name of newsserver

 Server1.Host=my1.newsserver.com

 # Port to connect to (default 119 if not specified)

 Server1.Port=119

 # Username to use for authentication

 Server1.Username=user

 # Password to use for authentication

 Server1.Password=pass

 # Server requires "Join Group"-command (yes, no) (default yes if not specified)

 Server1.JoinGroup=yes

 # Encrypted server connection (TLS/SSL) (yes, no)

 Server1.Encryption=no

 # Maximal number of simultaneous connections to this server

 Server1.Connections=4


 # Second server, on level 0

 #Server2.Level=0

 #Server2.Host=my2.newsserver.com

 #Server2.Port=119

 #Server2.Username=me

 #Server2.Password=mypass

 #Server2.JoinGroup=yes

 #Server2.Connections=4


 # Third server, on level 1

 #Server3.Level=1

 #Server3.Host=fills.newsserver.com

 #Server3.Port=119

 #Server3.Username=me2

 #Server3.Password=mypass2

 #Server3.JoinGroup=yes

 #Server3.Connections=1



 ##############################################################################

 ### PERMISSIONS (POSIX ONLY) ###


 # User name for daemon-mode (POSIX in daemon-mode only).

 # Set the user that the daemon normally runs at.

 # Set $MAINDIR with an absolute path to be sure where it will write.

 # This allows nzbget daemon to be launched in rc.local (at boot), and

 # download items as a specific user id.

 # NOTE: This option has effect only if the program was started from 

 # root-account, otherwise it is ignored and the daemon runs under 

 # current user id

 DaemonUserName=root


 # Specify default umask (affects file permissions) for newly created 

 # files (POSIX only).

 # The value should be written in octal form (the same as for "umask" shell 

 # command). If umask not specified (or a value greater than 0777 used, useful 

 # to disable current config-setting via command-line parameter) the umask-mode 

 # will not be set and current umask-mode (set via shell) will be used

 # NOTE: do not forget to uncomment the next line

 #UMask=022



 ##############################################################################

 ### DOWNLOAD QUEUE ###


 # Save download queue to disk. This allows to reload it on next start (yes, no)

 SaveQueue=yes


 # Reload download queue on start, if it exists (yes, no)

 ReloadQueue=yes


 # Reuse articles saved in the temp-directory from a previous program start (yes, no)
 # This allows the download of a file to continue if the program exited before
 # the file was completed.
 ContinuePartial=yes


 # Create subdirectory with category-name in destination-directory (yes, no)

 AppendCategoryDir=yes


 # Create subdirectory with nzb-filename in destination-directory (yes, no)

 AppendNzbDir=yes


 # How often incoming-directory (option "NzbDir") must be checked for new 

 # nzb-files, in seconds.

 # Value "0" disables the check.

 NzbDirInterval=5


 # Minimum age an nzb-file must have before it is loaded into the queue, in seconds.
 # Nzbget checks that the nzb-file was not modified in the last few seconds,
 # as defined by this option. This safety interval prevents loading files
 # which have not yet been completely saved to disk, for example because they
 # are still being downloaded in a web browser.
 NzbDirFileAge=60


 # Check for duplicate files (yes, no)
 # If this option is enabled, the program performs these checks when a new
 # nzb-file is added:
 # 1) whether the nzb-file contains duplicate entries. This check aims at
 #    detecting reposted files (when the first file was not fully uploaded);
 #    if the program finds two files with identical names, only the
 #    biggest of them is added to the queue;
 # 2) whether the download queue already contains a file with the same name;
 # 3) whether the destination file already exists on disk.
 # In the last two cases the file is not added to the queue if it already exists.
 # If this option is disabled, all files are downloaded and duplicate files
 # are renamed to "filename_duplicate1".
 # Existing files are never deleted or overwritten.
 DupeCheck=no


 # Visibly rename broken files on download appending "_broken" (yes, no)

 # Do not activate this option if par-check is enabled.

 RenameBroken=no


 # Decode articles (yes, no)

 # yes - decode articles using internal decoder (supports yEnc and UU formats).

 # no - the articles will not be decoded and joined. External programs 

 #	  (like "uudeview") can be used to decode and join downloaded articles.

 #	  Also useful for debugging to look at article's source text.

 Decode=yes


 # Write decoded articles directly into the destination output file (yes, no)
 # With this option enabled the program first creates the output
 # destination file with the required size (total size of all articles),
 # then writes decoded articles directly into that file on the fly,
 # without creating any temporary files.
 # This may result in a major performance improvement, but it highly
 # depends on the OS and filesystem used.
 # It can improve performance on very fast internet connections,
 # but you need to test whether it works in your case.
 # INFO: tests showed that on Linux with an EXT3 partition activating
 # this option results in up to 20% better performance, but on Windows with
 # NTFS or Linux with FAT32 partitions performance decreased.
 # The likely reason is that on an EXT3 partition Linux can create large files
 # very fast (the file content does not need to be initialized),
 # whereas Windows on NTFS and Linux on FAT32 must initialize a newly
 # created large file with nulls, causing a big performance degradation.
 # NOTE: for testing, download a few big files (500-1000MB total)
 # and measure the required time. Do not rely on the program's speed indicator.
 # NOTE: if both "DirectWrite" and "ContinuePartial" are enabled,
 # the program creates empty article-files in the temp directory. They
 # are used to continue the download on the next program start. To minimize
 # disk I/O it is recommended to disable "ContinuePartial" if
 # "DirectWrite" is enabled. Especially on fast connections (where you
 # would want to activate "DirectWrite") it should not be a problem to
 # redownload an interrupted file.
 DirectWrite=no


 # Check CRC of downloaded and decoded articles (yes, no)
 # Normally this option should be enabled for better detection of download
 # errors. However, checking the CRC needs about the same CPU time as
 # decoding the articles. On fast connections with slow CPUs, disabling the
 # CRC-check may slightly improve performance (if the CPU is the limiting factor).
 CrcCheck=yes


 # How many retries should be attempted if a download error occurs

 Retries=4


 # Set the interval between retries, in seconds

 RetryInterval=10


 # Redownload article if CRC-check fails (yes, no)

 # Helps to minimize the number of broken files, but may only be effective
 # if you have multiple download servers (even from the same provider,
 # but in different locations, e.g. Europe, USA).
 # In any case this option increases your traffic.
 # On slow connections, downloading extra par-blocks may be more effective.

 # The option "CrcCheck" must be enabled for option "RetryOnCrcError" to work.

 RetryOnCrcError=no


 # Set connection timeout, in seconds

 ConnectionTimeout=60


 # Timeout until a download-thread is killed, in seconds

 # This can help on hanging downloads, but is dangerous. 

 # Do not use small values!

 TerminateTimeout=600


 # Set the (approximate) maximum number of allowed threads.
 # Under certain circumstances the program may create far too many
 # download threads. Most of them are in a wait state. That is not bad
 # in itself, but threads are a limited resource: if a program creates too
 # many of them, the operating system may kill it. The option <ThreadLimit>
 # prevents that.
 # NOTE 1: the number of threads is not the same as the number of connections
 # opened to NNTP servers. Do not use <ThreadLimit> to limit the
 # number of connections; use the options <ServerX.Connections> instead.
 # NOTE 2: the actual number of created threads can be slightly larger than
 # defined by this option. Important threads may be created even if the
 # limit is exceeded; the option only prevents the creation of
 # additional download threads.
 # NOTE 3: in most cases you should leave the default value "100" unchanged.
 # However, you may increase it if you need more than 90 connections
 # (very unlikely), or decrease it if the OS does not allow so
 # many threads. Most OSes have no problem with 100 threads.
 ThreadLimit=100


 # Set the maximum download rate in KB/s, "0" means no speed control

 DownloadRate=0


 # Set the size of the memory buffer used when writing articles, in bytes.
 # Bigger values decrease disk I/O but increase memory usage.
 # Value "0" causes an OS-dependent default value to be used.
 # With value "-1" (meaning "max/auto") the program sets the size of the
 # buffer according to the size of the current article (typically less than 500K).
 # NOTE: the value must be written in bytes; do not use the postfixes "K" or "M".
 # NOTE: to calculate the memory usage, multiply WriteBufferSize by the max
 # number of connections configured in section "NEWS-SERVERS".
 # NOTE: a typical article does not exceed 500000 bytes, so bigger values
 # (like several megabytes) just waste memory.
 # NOTE: for desktop computers with plenty of memory the value "-1" (max/auto)
 # is recommended, but for computers with very little memory (routers, NAS)
 # the value "0" (default OS-dependent size) may be the better alternative.
 # NOTE: the write buffer is managed by the OS (system libraries), so
 # the effect of this option is highly OS-dependent.
 WriteBufferSize=0


 # Pause if disk space gets below this value, in MegaBytes.

 # Value "0" disables the check.

 # Only the disk space on the drive with "DestDir" is checked.

 # The drive with "TempDir" is not checked.

 DiskSpace=250



 ##############################################################################

 ### LOGGING ###


 # Create log file (yes, no)

 CreateLog=yes


 # Delete log file upon server start (only in server-mode) (yes, no)

 ResetLog=no


 # How various messages must be printed (screen, log, both, none)

 # Debug-messages can only be printed if the program was compiled in
 # debug-mode: "./configure --enable-debug"

 ErrorTarget=both

 WarningTarget=both

 InfoTarget=both

 DetailTarget=both

 DebugTarget=both


 # Number of messages stored in buffer and available for remote clients

 LogBufferSize=1000


 # Create a log of all broken files (yes ,no)

 # It is a text file placed near downloaded files, which contains

 # the names of broken files

 CreateBrokenLog=yes


 # Create memory dump (core-file) on abnormal termination (POSIX only) (yes, no)

 # Core-files are very helpful for debugging.

 # NOTE: core-files may contain sensible data, like your login/password to

 # newsserver etc.

 DumpCore=no


 # See also option "logfile" in section "PATHS"



 ##############################################################################

 ### DISPLAY ###


 # Set screen output mode (loggable, colored, curses)
 # loggable - only messages are printed to standard output;
 # colored  - prints messages (with simple coloring by message category)
 #            and download progress info; uses escape sequences to move the cursor;
 # curses   - advanced interactive interface with the ability to edit the
 #            download queue, plus various output options.
 OutputMode=curses


 # Shows NZB-Filename in file list in curses-outputmode (yes, no)

 # This option controls the initial state of curses-frontend,

 # it can be switched on/off in run-time with Z-key

 CursesNzbName=yes


 # Show files in groups (NZB-files) in queue list in curses-outputmode (yes, no)

 # This option controls the initial state of curses-frontend,

 # it can be switched on/off in run-time with G-key

 CursesGroup=no


 # Show timestamps in message list in curses-outputmode (yes, no)

 # This option controls the initial state of curses-frontend,

 # it can be switched on/off in run-time with T-key

 CursesTime=no


 # Update interval for Frontend-output in MSec (min value 25)

 # Bigger values reduce CPU usage (especially in curses-outputmode)

 # and network traffic in remote-client mode

 UpdateInterval=200



 ##############################################################################

 ### CLIENT/SERVER COMMUNICATION ###


 # Set the IP on which the server listens and which the client uses to contact
 # the server. It can be a DNS hostname or an IP address (the latter is more
 # efficient since it does not require a DNS lookup).
 # If you want the server to listen on all interfaces, use "0.0.0.0"
 ServerIp=127.0.0.1


 # Set the port which the server & client use

 ServerPort=6789


 # Set the password needed to successfully queue a request

 ServerPassword=tegbzn6789


 # See also option "logbuffersize" in section "LOGGING"



 ##############################################################################

 ### PAR CHECK/REPAIR AND POSTPROCESSING ###


 # Reload Post-processor-queue on start, if it exists (yes, no)

 # For this option to work the options "SaveQueue" and "ReloadQueue" must

 # be also enabled.

 ReloadPostQueue=yes


 # How many par2-files to load (none, all, one)
 # none - all par2-files are paused automatically
 # all - all par2-files are downloaded
 # one - only the main par2-file is downloaded; the others are paused
 # Paused files remain in the queue and can be unpaused by the par-checker
 # when needed
 LoadPars=one


 # Automatic par-verification (yes, no)

 # To download only needed par2-files (smart par-files loading) set also 

 # the option "loadpars" to "one". If option "loadpars" is set to "all",

 # all par2-files will be downloaded before verification and repair starts.

 # The option "renamebroken" must be set to "no", otherwise the par-checker

 # may not find renamed files and fail

 ParCheck=no


 # Automatic par-repair (yes, no)

 # If option "parcheck" is enabled and "parrepair" is not, the program

 # only verifies downloaded files and downloads needed par2-files, but does

 # not start repair-process. This is useful if the server does not have

 # enough CPU power, since repairing of large files may take too much

 # resources and time on a slow computers.

 # This option has effect only if the option "parcheck" is enabled

 ParRepair=yes


 # Use only par2-files with matching names (yes, no)
 # If the par-check needs extra par-blocks, it searches the download queue
 # for par2-files which can be unpaused and used for the repair.
 # These par2-files should have the same base name as the main par2-file
 # currently loaded in the par-checker. Sometimes extra par-files (especially
 # if they were uploaded by a different poster) do not have matching names.
 # Normally the par-checker does not use these files, but you can allow it
 # to by setting "StrictParName" to "no".
 # This has a side effect, however: if the NZB-file contains more than one
 # collection of files (with different par-sets), the par-checker may download
 # par-files from the wrong collection. This increases your traffic (but does
 # not harm the par-check).
 # NOTE: the par-checker always uses only par-files added from the same
 # NZB-file; the option "StrictParName" does not change this behavior.
 StrictParName=yes


 # Pause download queue during check/repair (yes, no)

 # Enable the option to give CPU more time for par-check/repair. That helps

 # to speed up check/repair on slow CPUs with fast connection (e.g. NAS-devices).

 # NOTE: if the par-checker needs additional par-files, it temporarily unpauses the queue

 # NOTE: See also option <PostPauseQueue>.

 ParPauseQueue=no


 # Cleanup download queue after successful check/repair (yes, no)

 # Enable this option for automatic deletion of unneeded (paused) par-files 

 # from download queue after successful check/repair.

 # NOTE: before cleaning up the program checks if all paused files are par-files.

 # If there are paused non-par-files (this means that you have paused them

 # manually), the cleanup will be skipped for this collection.

 ParCleanupQueue=yes


 # Delete source nzb-file after successful check/repair (yes, no)

 # Enable this option for automatic deletion of nzb-file from incoming directory 

 # after successful check/repair.

 NzbCleanupDisk=no


 # Set path to program, that must be executed after the download of nzb-file 

 # or one collection in nzb-file (if par-check enabled and nzb-file contains 

 # multiple collections; see note below for the definition of "collection") 

 # is completed and possibly par-checked/repaired.

 # Arguments passed to that program:

 #  1 - path to destination dir, where downloaded files are located;

 #  2 - name of nzb-file processed;

 #  3 - name of par-file processed (if par-checked) or empty string (if not);

 #  4 - result of par-check:

 #	  0 - not checked: par-check disabled or nzb-file does not contain any

 #		  par-files;

 #	  1 - checked and failed to repair;

 #     2 - checked and successfully repaired;

 #	  3 - checked and can be repaired but repair is disabled;

 #  5 - state of nzb-job:

 #	  0 - there are more collections in this nzb-file queued;

 #	  1 - this was the last collection in nzb-file;

 #  6 - indication of failed par-jobs for current nzb-file:

 #	  0 - no failed par-jobs;

 #	  1 - current par-job or any of the previous par-jobs for the

 #		  same nzb-files failed;

 #  7 - category assigned to nzb-file (can be empty string).

 #

 # NOTE: The parameter "state of nzb-job" is very important and MUST be checked 

 # even in the simplest scripts.

 # If par-check is enabled and nzb-file contains more than one collection

 # of files the postprocess-program is called after each collection is completed

 # and par-checked. If you want to unpack files or clean up the directory 

 # (delete par-files, etc.) there are two possibilities, when you can do this:

 #  1) you parse the "name of par-file processed" to find out the base name

 #	 of collection and clean up only files from this collection (not reliable,

 #	 because par-files sometimes have different names than rar-files);

 #  2) or you just check the parameters "state of nzb-job" and "indication of 

 #	 failed par-jobs" and do the processing, only if they are set to "1" 

 #	 (which means, that this was the last collection in nzb-file and all files 

 #	  are now completed) and to "0" (no failed par-jobs) respectively;

 # NOTE 2: if the option "ParCheck" is disabled nzbget calls PostProcess

 # only once, not after every collection, because the detection of collection

 # is disabled in this case;

 # NOTE 3: the term "collection" in the above description actually means 

 # "par-set". To determine what "collections" are present in nzb-file nzbget 

 # looks for par-sets. If any collection of files within nzb-file does 

 # not have any par-files, this collection will not be detected.

 # For example, for nzb-file containing three collections but only two par-sets, 

 # the postprocess will be called two times - after processing of each par-set.

 # NOTE 4: an example script for unrarring is provided within distribution 

 # in file <postprocess-example.sh>

 # NOTE 5: do not forget to uncomment the next line

 #PostProcess=~/myscript.sh


 # Allow multiple post-processing for the same nzb-file (yes,no)

 # After the post-processing (par-check and call of a postprocess-script) is

 # completed, nzbget adds the nzb-file to a list of completed-jobs. The nzb-file

 # stays in the list until the last file from that nzb-file is deleted from 

 # the download queue (it occurs straight away if the par-check was successful 

 # and the option "ParCleanupQueue" is enabled).

 # That means that if a paused file from an nzb-collection becomes unpaused
 # (manually or from a post-process-script) after the collection was already
 # post-processed, nzbget will not post-process the nzb-file again.
 # This prevents unwanted multiple post-processing of the same nzb-file.

 # But it might be needed if the par-check/-repair are performed not directly 

 # by nzbget but from a post-process-script.

 # NOTE 1: enable this option only if you were advised to do that by the author

 # of the post-process-script

 # NOTE 2: if you enable "AllowReProcess", you should disable the option
 # "ParCheck" to prevent multiple par-checks

 AllowReProcess=no


 # Set the default message-kind for output received from postprocess-script 

 # (None, Detail, Info, Warning, Error, Debug).

 # NZBGet checks if the line written by the script to stdout or stderr starts

 # with special character-sequence, determining the message-kind, e.g.:

 # [INFO] bla-bla

 # [DETAIL] bla-bla

 # [WARNING] bla-bla

 # [ERROR] bla-bla

 # [DEBUG] bla-bla

 # If the message-kind was detected the text is added to log with detected type.

 # Otherwise the message becomes the default kind, specified in this option.

 PostLogKind=Detail


 # Pause download queue during executing of postprocess-script (yes, no)

 # Enable the option to give CPU more time for postprocess-script. That helps

 # to speed up postprocess on slow CPUs with fast connection (e.g. NAS-devices).

 # NOTE: See also option <ParPauseQueue>.

 PostPauseQueue=no


 ##############################################################################

 ### PERFORMANCE ###


 # On a very fast connection and slow CPU and/or drive the following 

 # settings may improve performance:

 # 1) Disable par-checking and -repairing ("ParCheck=no"). VERY important,

 #	because par-checking/repairing needs a lot of CPU-power and 

 #	significantly increases disk usage;

 # 2) Try to activate option "DirectWrite" ("DirectWrite=yes");

 # 3) Disable option "CrcCheck" ("CrcCheck=no");

 # 4) Disable option "ContinuePartial" ("ContinuePartial=no");

 # 5) Do not limit download rate ("DownloadRate=0"), because the bandwidth 

 #	throttling eats CPU time;

 # 6) Disable logging for info- and debug-messages ("InfoTarget=none", 

 #	"DebugTarget=none");

 # 7) Run the program in daemon (POSIX) or service (Windows) mode and use a
 #	remote client only for the short periods needed to control the
 #	download process on the server. Daemon/service mode uses fewer CPU
 #	resources because it does not update any screen output.

 # 8) Increase the value of option "WriteBufferSize" or better set it to 

 #	"-1" (max/auto) if you have spare 5-20 MB of memory.
#!/bin/sh

 #

 # NZBGet post-process script

 # The script unrars downloaded rar-files, joins ts-files and renames img-files to iso.

 #

 # Copyright (C) 2008 Peter Roubos <peterroubos@hotmail.com>

 # Copyright (C) 2008 Otmar Werner

 # Copyright (C) 2008 Andrei Prygounkov <hugbug@users.sourceforge.net>

 #

 # This program is free software; you can redistribute it and/or modify

 # it under the terms of the GNU General Public License as published by

 # the Free Software Foundation; either version 2 of the License, or

 # (at your option) any later version.

 #

 # This program is distributed in the hope that it will be useful,

 # but WITHOUT ANY WARRANTY; without even the implied warranty of

 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

 # GNU General Public License for more details.

 #

 # You should have received a copy of the GNU General Public License

 # along with this program; if not, write to the Free Software

 # Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

 #

 #


 ####################### Settings section #######################


 # Set the full path to unrar if it is not in your PATH

 UnrarCmd=/opt/bin/unrar


 # Delete rar-files after unpacking (1, 0)

 DeleteRarFiles=1


 # Join TS-files (1, 0)

 JoinTS=0


 # Rename img-files to iso (1, 0)

 RenameIMG=0


 ####################### End of settings section #######################



 # Parameters passed to script by nzbget:

 #  1 - path to destination dir, where downloaded files are located;

 #  2 - name of nzb-file processed;

 #  3 - name of par-file processed (if par-checked) or empty string (if not);

 #  4 - result of par-check:

 #	  0 - not checked: par-check disabled or nzb-file does not contain any

 #		  par-files;

 #	  1 - checked and failed to repair;

 #     2 - checked and successfully repaired;

 #	  3 - checked and can be repaired but repair is disabled;

 #  5 - state of nzb-job:

 #	  0 - there are more collections in this nzb-file queued;

 #	  1 - this was the last collection in nzb-file;

 #  6 - indication of failed par-jobs for current nzb-file:

 #	  0 - no failed par-jobs;

 #	  1 - current par-job or any of the previous par-jobs for the

 #		  same nzb-files failed;


 DownloadDir="$1"

 NzbFile="$2"

 ParCheck=$4

 NzbState=$5

 ParFail=$6


 # Check that the script was called with all parameters

 if [ "$#" -lt 6 ]

 then

		 echo "*** NZBGet post-process script ***"

		 echo "This script is supposed to be called from nzbget."

		 exit

 fi


 echo "[INFO] Unpack: Post-process script successfully started"


 # Check if all is downloaded and repaired

 if [ ! "$NzbState" -eq 1 ]

 then

		 echo "[INFO] Unpack: Not the last collection in nzb-file, exiting"

		 exit

 fi

 if [ ! "$ParCheck" -eq 2 ]

 then

		 echo "[WARNING] Unpack: Par-check failed or disabled, exiting"

		 exit

 fi

 if [ ! "$ParFail" -eq 0 ]

 then

		 echo "[WARNING] Unpack: Previous par-check failed, exiting"

		 exit

 fi


 # All OK, processing the files

 cd "$DownloadDir"


 # Make a temporary directory to store the unrarred files

 mkdir extracted


 # Remove the Par files

 echo "[INFO] Unpack: Deleting par2-files"

 rm *.[pP][aA][rR]2


 # Unrar the files (if any) into the temporary directory; if there are no
 # rar-files this does nothing
 if (ls *.rar >/dev/null 2>&1)

 then

	 echo "[INFO] Unpack: Unraring"

		 $UnrarCmd x -y -p- -o+ "*.rar"  ./extracted/

 fi


 if [ $JoinTS -eq 1 ]

 then

		 # Join any split .ts files if they are named xxxx.0000.ts xxxx.0001.ts

		 # They will be joined together to a file called xxxx.0001.ts

		 if (ls *.ts >/dev/null 2>&1)

		 then

			 echo "[INFO] Unpack: Joining ts-files"

				 tsname=`find . -name "*0001.ts" | awk -F/ '{print $NF}'`
				 cat *0???.ts > "./extracted/$tsname"

		 fi


		 # Remove all the split .ts files

	 echo "[INFO] Unpack: Deleting source ts-files"

		 rm *0???.ts

 fi


 # Remove the rar files

 if [ $DeleteRarFiles -eq 1 ]

 then

	 echo "[INFO] Unpack: Deleting rar-files"

		 rm *.r[0-9][0-9]

		 rm *.rar

		 rm *.s[0-9][0-9]

 fi


 # Go to the temp directory and try to unrar again.
 # If there are any rars inside the extracted rars, these will now also be unrarred
 cd extracted
 if (ls *.rar >/dev/null 2>&1)

 then

	 echo "[INFO] Unpack: Calling unrar (second pass)"

		 $UnrarCmd x -y -p- -o+ "*.rar"


		 # Delete the Rar files

		 if [ $DeleteRarFiles -eq 1 ]

		 then

			 echo "[INFO] Unpack: Deleting rar-files (second pass)"

				 rm *.r[0-9][0-9]

				 rm *.rar

				 rm *.s[0-9][0-9]

		 fi

 fi


 # Move everything back to the Download folder

 mv * ..

 cd ..


 # Clean up the temp folder

 echo "[INFO] Unpack: Cleaning up"

 rmdir extracted

 chmod -R a+rw .

 rm *.nzb

 rm *.1

 rm *.sfv

 rm _brokenlog.txt


 if [ $RenameIMG -eq 1 ]

 then

 # Rename img file to iso

 # It will be renamed to .img.iso so you can see that it has been renamed

		 if (ls *.img >/dev/null 2>&1)

		 then

			 echo "[INFO] Unpack: Renaming img-files to iso"

				 imgname=`find . -name "*.img" | awk -F/ '{print $NF}'`
				 mv "$imgname" "$imgname.iso"

		 fi

 fi

Thanks in advance for your help, because I just can't find the solution.

gregus

Posted

Nobody to help me? I've been searching everywhere and can't find a solution, it's crazy :(

Hi,

To verify and repair the files while also renaming them, you can use the par2 command (installable with ipkg: ipkg install par2cmdline):

par2 v par2_file * : to verify the files (run from their directory)

par2 r par2_file * : to repair the files (run from their directory)

The '*' matters: without it, files whose names differ from those listed in the par2 are not scanned at all.

By integrating these commands into the post-processing script (for example by running them when nzbget returns the 'checked and failed to repair' code), you should be able to sort this out.

Cheers,

Pascal
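A minimal sketch of how that par2 pass might be wired into the post-process script, run before unrar when misnamed archives are suspected. The function name, log-message format and the par2 binary location are assumptions (on a NAS it is typically /opt/bin/par2 after `ipkg install par2cmdline`); adjust for your setup.

```shell
#!/bin/sh
# Hypothetical repair-and-rename pass using par2cmdline, to be called from
# the nzbget post-process script before unrar. Names here are illustrative,
# not part of nzbget itself.

Par2Cmd=par2   # assumed path; e.g. /opt/bin/par2 on many NAS devices

repair_with_rename() {
    # $1 - directory containing the downloaded files and their par2-set
    cd "$1" || return 1
    # Pick one par2 index file for par2cmdline to start from
    main_par2=$(ls *.par2 2>/dev/null | head -n 1)
    if [ -z "$main_par2" ]; then
        echo "[INFO] Unpack: no par2-files found, nothing to repair"
        return 0
    fi
    if ! command -v "$Par2Cmd" >/dev/null 2>&1; then
        echo "[WARNING] Unpack: $Par2Cmd not found, skipping repair"
        return 1
    fi
    # The trailing '*' is the key part: par2 then scans every file in the
    # directory, so misnamed rar-archives are recognized by content and
    # renamed/repaired instead of being reported as missing.
    "$Par2Cmd" r "$main_par2" *
}
```

It could be called as `repair_with_rename "$DownloadDir"` right after the script's parameter checks; a non-zero return can then be treated like a failed par-check.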

Archived

This topic is now archived and is closed to further replies.
