NOTE:
This file contains additional notes about filters. The text here will be copied
verbatim into the generated manpage. If you add a section, you must
ensure that the encapsulating boundaries are maintained because the
make-filter-man.sh script greps for them. A blank section for copy&paste:
*** SNIP START ***
---------------------->[ NAME.help ]
Description of filter NAME
<----------------------|
*** SNIP END ***
NAME is the basename of the filter's filename. So if the filter library is
named filter_foo.so, the basename of the filter is "foo".
If your filter supports the TC_FILTER_GET_CONFIG interface (which it should),
there is no need to repeat the description of the filter options here.
---------------------->[ 32detect.help ]
This filter checks for interlaced video frames.
Subsequent de-interlacing with transcode can be enforced with the 'force_mode' option.
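A sketch of a possible invocation (the value given to 'force_mode' below is
only an assumption; query the filter's real option list, e.g. with tcmodinfo):
  transcode -i input.avi -J 32detect=force_mode=3 -o output.avi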
<----------------------|
---------------------->[ compare.help ]
Generate a file with information about the times, frames, etc. at which the
pattern defined in the 'image' parameter is observed.
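A sketch of a possible invocation (only the 'image' option name is taken from
the description above; verify it against the filter's own help):
  transcode -i input.avi -J compare=image=pattern.png -o output.avi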
<----------------------|
---------------------->[ control.help ]
The format of the command file is a frame number, followed by at least one
whitespace character, followed by the command, followed by at least one
whitespace character, followed by the arguments for the command.
Empty lines and lines starting with a `#' are ignored. The frame numbers must
be sorted in ascending order.
# Example file
# At frame 10 load the smooth filter
10 load smooth
# reconfigure at 20
20 configure smooth=strength=0.9
99 disable smooth
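A sketch of loading the filter with such a command file (the 'file' option
name is an assumption; check the filter's help for the real name):
  transcode -i input.avi -J control=file=commands.txt -o output.avi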
<----------------------|
---------------------->[ cpaudio.help ]
Copies audio from one channel to another
<----------------------|
---------------------->[ decimate.help ]
see /docs/README.Inverse.Telecine.txt
<----------------------|
---------------------->[ detectclipping.help ]
Detect black regions on the top, bottom, left and right of an image. It is
suggested that the filter is run for around 100 frames. It will print its
detected parameters every frame. If you don't notice any change in the
printout for a while, the filter probably won't find any other values. The
filter converges, i.e. it keeps refining its estimate as it sees more frames.
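A sketch of a detection-only run over roughly the first 100 frames (the frame
range and the null export module are just one convenient way to do a quick
pass):
  transcode -i input.avi -c 0-100 -J detectclipping -y null -o /dev/null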
<----------------------|
---------------------->[ denoise3d.help ]
What:
The denoise3d filter from mplayer (sibling of hqdn3d). It works in a crude and
simple way but is also very fast. In fact it is even faster than the original
from mplayer, as I managed to tweak some things (among others, zero frame
copying).
Who:
Everyone who wants their captured frames thoroughly denoised (i.e. who wants
to encode to mpeg or mjpeg) but does not have enough processing power to
encode in real time AND use hqdn3d (better quality but a lot slower) or dnr
(slower still), not to mention the other denoisers, which are even slower.
Quality is really good for static scenes (if fed with the right parameters);
moving objects may show a little ghost image, though (this also depends on the
parameters). Your mileage may vary.
How:
The parameters are the same as for the hqdn3d module, although in practice you
will not end up with exactly the same values. Just experiment. Particular to
this version of the filter is that if you supply -1 for either component's
parameters (luma/chroma), the filter will not be applied to that component. If
you are still short on CPU cycles, try disabling the luma filter; this will
not make much difference in the effectiveness of the filter!
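A sketch of skipping the luma component as described above (the option names
'luma' and 'chroma' are assumptions; check the filter's help for the real
names and value ranges):
  transcode -i input.avi -J denoise3d=luma=-1:chroma=6 -o output.avi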
<----------------------|
---------------------->[ dnr.help ]
see /docs/filter_dnr.txt (German only)
<----------------------|
---------------------->[ doublefps.help ]
Converts interlaced video into progressive video with half the
original height and twice the frame rate (FPS), by converting each
interlaced field to a separate frame. Optionally allows the two
fields to be shifted by half a pixel each to line them up correctly
(at a significant cost in processing time).
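A minimal sketch (run without options, the filter simply turns each field into
its own frame; the exact name of the half-pixel shift option should be taken
from the filter's help):
  transcode -i interlaced.avi -J doublefps -o progressive.avi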
<----------------------|
---------------------->[ fields.help ]
The 'fields' filter is designed to shift, reorder, and
generally rearrange independent fields of an interlaced
video input. Input retrieved from broadcast (PAL, NTSC,
etc) video sources generally comes in an interlaced form
where each pass from top to bottom of the screen displays
every other scanline, and then the next pass displays the
lines between the lines from the first pass. Each pass is
known as a "field" (there are generally two fields per
frame). When this form of video is captured and manipulated
digitally, the two fields of each frame are usually merged
together into one flat (planar) image per frame. This
usually produces reasonable results; however, there are
conditions which can cause this merging to be performed
incorrectly or less than optimally, which is where this
filter can help.
The following options are supported for this filter
(they can be separated by colons):
shift      - Shift the video by one field (half a frame),
             changing frame boundaries appropriately. This is
             useful if a video capture started grabbing video
             half a frame (one field) off from where frame
             boundaries were actually intended to be.
flip       - Exchange the top field and bottom field of each
             frame. This can be useful if the video signal was
             sent "bottom field first" (which can happen
             sometimes with PAL video sources) or other
             oddities occurred which caused the frame
             boundaries to be at the right place, but the
             scanlines to be swapped.
flip_first - Normally shifting is performed before flipping if
             both are specified. This option reverses that
             behavior. You should not normally need to use
             this unless you have some extremely odd input
             material; it is here mainly for completeness.
help       - Print this text.
Note: the 'shift' function may produce slight color
discrepancies if YV12 is used as the internal transcode
video format (-V flag). This is because YV12 does not
contain enough information to do field shifting cleanly. For
best (but slower) results, use RGB mode for field shifting.
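A sketch of shifting and then flipping fields in one pass, using the
colon-separated option syntax described above:
  transcode -i capture.avi -J fields=shift:flip -o fixed.avi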
<----------------------|
---------------------->[ fps.help ]
options: :