
README.html

This work is licensed under a Creative Commons Attribution 2.5 License.

/*====================================================================*
- Copyright (C) 2001 Leptonica. All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
- 1. Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above
- copyright notice, this list of conditions and the following
- disclaimer in the documentation and/or other materials
- provided with the distribution.
-
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ANY
- CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
- NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*====================================================================*/

README (1.73: 25 Jan 2016)
---------------------------

gunzip leptonica-1.73.tar.gz
tar -xvf leptonica-1.73.tar







Building leptonica

I/O libraries leptonica depends on

Developing with leptonica

What's in leptonica?








Building leptonica




1. Top view

This tar includes:
(1) src: library source and function prototypes for building liblept
(2) prog: source for regression tests, usage example programs, and
sample images
for building on these platforms:
- Linux on x86 (i386) and AMD 64 (x64)
- OS X (both PowerPC and x86).
- Cygwin, msys and mingw on x86
There is an additional zip file for building with MS Visual Studio.

Libraries, executables and prototypes are easily made, as described below.

When you extract from the archive, all files are put in a
subdirectory 'leptonica-1.73'. In that directory you will
find a src directory containing the source files for the library,
and a prog directory containing source files for various
testing and example programs.

2. Building on Linux/Unix/MacOS

There are three ways to build the library:

(1) By customization: Use the existing static makefile,
src/makefile.static and customize the build by setting flags
in src/environ.h. See src/environ.h and src/makefile for details.
Note: if you are going to develop with leptonica, I encourage
you to use the static makefiles.

(2) Using autoconf (supported by James Le Cuirot).
Run ./configure in this directory to
build Makefiles here and in src. Autoconf handles the
following automatically:
* architecture endianness
* enabling Leptonica I/O image read/write functions that
depend on external libraries (if the libraries exist)
* enabling functions for redirecting formatted image stream
I/O to memory (on linux only)
After running ./configure: make; make install. There's also
a 'make check' for testing.

(3) Using cmake (supported by Egor Pugin).
The build must always be in a different directory from the root
of the source (here). It is common to build in a subdirectory
of the root. From here:
mkdir build
cd build
cmake ..
make
Alternatively, from here:
mkdir build
cmake -H. -Bbuild (-H specifies the source directory,
-B the directory for the build)
make
To clean out the current build, just remove everything in
the build subdirectory.

In more detail:

(1) Customization using the static makefiles:

* FIRST THING: Run make-for-local. This simply renames
src/makefile.static --> src/makefile
prog/makefile.static --> prog/makefile
[Note: the autoconf build will not work if you have any files
named "makefile" in src or prog. If you've already run
make-for-local and renamed the static makefiles, and you then
want to build with autoconf, run make-for-auto to rename them
back to makefile.static.]

* You can customize for:
(a) Including Leptonica I/O functions that depend on external
libraries [use flags in src/environ.h]
(b) Adding functions for redirecting formatted image stream
I/O to memory [use flag in src/environ.h]
(c) Specifying the location of the object code. By default it
goes into a tree whose root is also the parent of the src
and prog directories. This can be changed using the
ROOT_DIR variable in makefile.

* Build the library:
- To make an optimized version of the library (in src):
make
- To make a debug version of the library (in src):
make DEBUG=yes debug
- To make a shared library version (in src):
make SHARED=yes shared
- To make the prototype extraction program (in src):
make (to make the library first)
make xtractprotos

* To use shared libraries, you need to include the location of
the shared libraries in your LD_LIBRARY_PATH.

* To make the programs in the prog directory, first make liblept
in src. Then in prog you can customize the makefile for linking
the external libraries. Finally, do 'make' in the prog directory.

VERY IMPORTANT: the 240+ programs in the prog directory are
an integral part of this package. These can be divided into
four groups:
(1) Programs that are useful applications for running on the
command line. They can be installed from autoconf builds
using 'make install'. Examples of these are the PostScript
and pdf conversion programs: converttopdf, converttops,
convertfilestopdf, convertfilestops, convertsegfilestopdf,
convertsegfilestops, printimage and printsplitimage.
(2) Programs that are used as regression tests in alltests_reg.
These are named *_reg, and 66 of them are invoked together
(alltests_reg). The regression test framework has been
standardized, and regression tests are relatively easy
to write. See regutils.h for details.
(3) Other regression tests, some of which have not (yet) been
put into the framework. They are also named *_reg.
(4) Programs that were used to test library functions or
auto-generate library code. These are useful for testing
the behavior of small sets of functions, and for
providing example code.

(2) Building using autoconf (Thanks to James Le Cuirot)

Use the standard incantation, in the root directory (the
directory with configure):
./configure [build the Makefile]
make [builds the library and shared library versions of
all the progs]
make install [as root; this puts liblept.a into /usr/local/lib/
and 13 of the progs into /usr/local/bin/ ]
make [-j2] check [runs the alltests_reg set of regression tests.
This works even if you build in a different
place from the distribution. The -j parameter
should not exceed half the number of cores.
If the parallel test fails, rerun with plain 'make check']

Configure also supports building in a separate directory from the
source. Run "/(path-to)/leptonica-1.73/configure" and then "make"
from the desired build directory.

Configure has a number of useful options; run "configure --help" for
details. If you're not planning to modify the library, adding the
"--disable-dependency-tracking" option will speed up the build. By
default, both static and shared versions of the library are built. Add
the "--disable-shared" or "--disable-static" option if one or the other
isn't needed. To skip building the programs, use "--disable-programs".

By default, the library is built with debugging symbols. If you do not
want these, use "CFLAGS=-O2 ./configure" to eliminate symbols for
subsequent compilations, or "make CFLAGS=-O2" to override the default
for compilation only. Another option is to use the 'install-strip'
target (i.e., "make install-strip") to remove the debugging symbols
when the library is installed.

Finally, if you find that the installed programs are unable to link
at runtime to the installed library, which is in /usr/local/lib,
try to run configure in this way:
LDFLAGS="-Wl,-rpath -Wl,/usr/local/lib" ./configure
which causes the compiler to pass those options through to the linker.

For the debian distribution, out of all the programs in the prog
directory, we only build a small subset of general purpose
utility programs. This subset is the same set of programs that
'make install' puts into /usr/local/bin. It has no dependency on
the image files that are bundled in the prog directory for testing.

(3) Using cmake

There are a couple of flags you can use on the cmake line to
determine what is built here.

* By default, cmake builds a shared library. To make a static
library:
cmake .. -DSTATIC=1

* By default, cmake only builds the library, not the programs.
To make progs from the build subdirectory:
cmake .. -DBUILD_PROG=1

(4) Cross-compiling for windows

You can use src/makefile.mingw for cross-compiling in linux.


3. Building on Windows

(a) Building with Visual Studio

Tom Powers has provided a set of developer notes and project files
for building the library and applications under windows with VC++ 2008:



http://www.leptonica.org/vs2008doc/index.html

http://www.leptonica.org/download.html#VS2008



He has also supplied a zip file that contains the entire 'lib'
and 'include' directories needed to build Windows-based programs
using static or dynamic versions of the leptonica library
(including static library versions of zlib, libpng, libjpeg,
libtiff, and giflib).



leptonica-1.68-win32-lib-include-dirs.zip



You can download Tom's vs2008 package either from the download
page or from code.google.com/p/leptonica.

(b) Building for mingw with MSYS
(Thanks to David Bryan)

MSYS is a Unix-compatible build environment for the mingw compiler.
Installing the "MSYS Base System" and "MinGW Compiler Suite" will allow
building the library with autoconf as in (2) above. It will also allow
building with the static makefile as in (1) above if this option
is added to the make command:

CC="gcc -D_BSD_SOURCE -DANSI"

Only the static library may be built this way; the autoconf method must
be used if a shared (DLL) library is desired.

External image libraries (see below) must be downloaded separately,
built, and installed before building the library. Pre-built libraries
are available from the ezwinports project.

(c) Building for Cygwin
(Thanks to David Bryan)

Cygwin is a Unix-compatible build and runtime environment. Installing
the "Base" and "Devel" packages, plus the desired graphics libraries
from the "Graphics" and "Libs" packages, will allow building the
library with autoconf as in (2) above. If the graphics libraries
are not present in the /lib, /usr/lib, or /usr/local/lib directories,
you must run make with the "LDFLAGS=-L/(path-to-image)/lib" option.
It will also allow building with the static makefile as in (1)
above if this option is added to the make command:

CC="gcc -ansi -D_BSD_SOURCE -DANSI"

Only the static library may be built this way; the autoconf method must
be used if a shared (DLL) library is desired.




I/O libraries leptonica depends on




Leptonica is configured to handle image I/O using these external
libraries: libjpeg, libtiff, libpng, libz, libgif, libwebp, libopenjp2

These libraries are easy to obtain. For example, using the
debian package manager:
sudo apt-get install <package>
where <package> = {libpng12-dev, libjpeg62-dev, libtiff4-dev}.

Leptonica also allows image I/O with bmp and pnm formats, for which
we provide the serializers (encoders and decoders). It also
gives output drivers for wrapping images in PostScript and PDF, which
in turn use tiffg4, jpeg and flate (i.e., zlib) encoding. PDF will
also wrap jpeg2000 images.

There is a programmatic interface to gnuplot. To use it, you
need only the gnuplot executable (suggest version 3.7.2 or later);
the gnuplot library is not required.

If you build with automake, libraries on your system will be
automatically found and used.

The rest of this section is for building with the static makefiles.
The entries in environ.h specify which of these libraries to use.
The default is to link to these four libraries:
libjpeg.a (standard jfif jpeg library, version 6b, 7, 8 or 9)
libtiff.a (standard Leffler tiff library, version 3.7.4 or later;
current non-beta version is 3.8.2)
libpng.a (standard png library, suggest version 1.4.0 or later)
libz.a (standard gzip library, suggest version 1.2.3)

These libraries (and their shared versions) should be in /usr/lib.
(If they're not, you can change the LDFLAGS variable in the makefile.)
Additionally, for compilation, the following header files are
assumed to be in /usr/include:
jpeg: jconfig.h
png: png.h, pngconf.h
tiff: tiff.h, tiffio.h

If for some reason you do not want to link to specific libraries,
even if you have them, stub files are included for the ten
different output formats:
bmp, jpeg, png, pnm, ps, pdf, tiff, gif, webp and jp2.
For example, if you don't want to include the tiff library,
in environ.h set:
#define HAVE_LIBTIFF 0
and the stubs will be linked in.

To read and write webp files:
(1) Download libwebp from sourceforge
(2) #define HAVE_LIBWEBP 1 (in environ.h)
(3) In prog/makefile, edit ALL_LIBS to include -lwebp
(4) The library will be installed into /usr/local/lib.
You may need to add that directory to LDFLAGS; or, equivalently,
add that path to the LD_LIBRARY_PATH environment variable.

To read and write jpeg2000 files:
(1) Download libopenjp2, version 2.X, from their distribution,
along with cmake. There is no debian version of openjpeg 2.X
as of 12/26/2014.
(2) #define HAVE_LIBJP2K 1 (in environ.h)
(2a) If you have version 2.X, X != 1, edit LIBJP2K_HEADER (in environ.h)
(3) In prog/makefile, edit ALL_LIBS to include -lopenjp2
(4) The library will be installed into /usr/local/lib.

To read and write gif files:
(1) Download version giflib-5.X.X from sourceforge
(2) #define HAVE_LIBGIF 1 (in environ.h)
(3) In prog/makefile, edit ALL_LIBS to include -lgif
(4) The library will be installed into /usr/local/lib.
(5) Note: do not use giflib-4.1.4: its binary compression and
decompression don't pack the pixel data and are ridiculously slow.




Developing with leptonica




You are encouraged to use the static makefiles if you are developing
applications using leptonica. The following instructions assume
that you are using the static makefiles and customizing environ.h.

1. Automatic generation of prototypes

The prototypes are automatically generated by the program xtractprotos.
They can either be put in-line into allheaders.h, or they can be
written to a file leptprotos.h, which is #included in allheaders.h.
Note: (1) We supply the former version of allheaders.h.
(2) all .c files simply include allheaders.h.

First, make xtractprotos:
make xtractprotos

Then to generate the prototypes and make allheaders.h, do one of
these two things:
make allheaders [puts everything into allheaders.h]
make allprotos [generates a file leptprotos.h containing the
function prototypes, and includes it in allheaders.h]

Things to note about xtractprotos, assuming that you are developing
in Leptonica and need to regenerate the prototypes in allheaders.h:

(1) xtractprotos is part of Leptonica. You can 'make' it in either
src or prog (see the makefile).
(2) You can output the prototypes for any C file to stdout by running:
xtractprotos <cfile> or
xtractprotos -prestring=[string] <cfile>
(3) The source for xtractprotos has been packaged up into a tar
containing just the Leptonica files necessary for building it
in linux. The tar file is available at:
www.leptonica.com/source/xtractlib-1.5.tar.gz

2. GNU runtime functions for stream redirection to memory

There are two non-standard gnu functions, fmemopen() and open_memstream(),
that only work on linux and conveniently allow memory I/O with a file
stream interface. This is convenient for compressing and decompressing
image data in memory rather than through a file. Stubs are provided
for all these I/O functions. The default is to enable them; OS X developers
must disable them by setting #define HAVE_FMEMOPEN 0 (in environ.h).
If these functions are not enabled, conversion from raster to compressed
data in memory is accomplished safely, but through a temporary file.
See section 9 (Image I/O) below for more details on image I/O formats.

If you're building with the autoconf programs, these two functions are
automatically enabled if available.

3. Typedefs

A deficiency of C is that no standard has been universally
adopted for typedefs of the built-in types. As a result,
typedef conflicts are common, and cause no end of havoc when
you try to link different libraries. If you're lucky, you
can find an order in which the libraries can be linked
to avoid these conflicts, but the state of affairs is aggravating.

The most common typedefs use lower case variables: uint8, int8, ...
The png library avoids typedef conflicts by altruistically
appending "png_" to the type names. Following that approach,
Leptonica appends "l_" to the type name. This should avoid
just about all conflicts. In the highly unlikely event that it doesn't,
here's a simple way to change the type declarations throughout
the Leptonica code:
(1) customize a file "converttypes.sed" with the following lines:
/l_uint8/s//YOUR_UINT8_NAME/g
/l_int8/s//YOUR_INT8_NAME/g
/l_uint16/s//YOUR_UINT16_NAME/g
/l_int16/s//YOUR_INT16_NAME/g
/l_uint32/s//YOUR_UINT32_NAME/g
/l_int32/s//YOUR_INT32_NAME/g
/l_float32/s//YOUR_FLOAT32_NAME/g
/l_float64/s//YOUR_FLOAT64_NAME/g
(2) in the src and prog directories:
- if you have a version of sed that does in-place conversion:
sed -i -f converttypes.sed *
- else, do something like (in csh)
foreach file (*)
sed -f converttypes.sed $file > tempdir/$file
end

If you are using Leptonica with a large code base that typedefs the
built-in types differently from Leptonica, just edit the typedefs
in environ.h. This should have no side-effects with other libraries,
and no issues should arise with the location in which liblept is
included.

For compatibility with 64 bit hardware and compilers, where
necessary we use the typedefs in stdint.h to specify the pointer
size (either 4 or 8 byte).

4. Compile-time control over stderr output (see environ.h)

Leptonica provides both compile-time and run-time control over
messages and debug output (thanks to Dave Bryan). Both compile-time
and run-time severity thresholds can be set. The run-time threshold
can also be set by an environmental variable. Messages are
vararg-formatted and of 3 types: error, warning, informational.
These are all macros, and can be further suppressed when
NO_CONSOLE_IO is defined on the compile line. For production code
where no output is to go to stderr, compile with -DNO_CONSOLE_IO.

5. In-memory raster format (Pix)

Unlike many other open source packages, Leptonica uses packed
data for images with all bit/pixel (bpp) depths, allowing us
to process pixels in parallel. For example, rasterops works
on all depths with 32-bit parallel operations throughout.
Leptonica is also explicitly configured to work on both little-endian
and big-endian hardware. RGB image pixels are always stored
in 32-bit words, and a few special functions are provided for
scaling and rotation of RGB images that have been optimized by
making explicit assumptions about the location of the R, G and B
components in the 32-bit pixel. In such cases, the restriction
is documented in the function header. The in-memory data structure
used throughout Leptonica to hold the packed data is a Pix,
which is defined and documented in pix.h. The alpha component
in RGB images is significantly better supported, starting in 1.70.

Additionally, a FPix is provided for handling 2D arrays of floats,
and a DPix is provided for 2D arrays of doubles. Converters
between these and the Pix are given.

6. Conversion between Pix and other in-memory raster formats

If you use Leptonica with other imaging libraries, you will need
functions to convert between the Pix and other image data
structures. To make a Pix from other image data structures, you
will need to understand pixel packing, pixel padding, component
ordering and byte ordering on raster lines. See the file pix.h
for the specification of image data in the pix.

7. Custom memory management

Leptonica allows you to use custom memory management (allocator,
deallocator). For Pix, which tend to be large, the alloc/dealloc
functions can be set programmatically. For all other structs and arrays,
the allocators are specified in environ.h. Default functions
are malloc and free. We have also provided a sample custom
allocator/deallocator for Pix, in pixalloc.c.




What's in leptonica?



1. Rasterops

This is a source for a clean, fast implementation of rasterops.
You can find details starting at the Leptonica home page,
and also by looking directly at the source code.
The low-level code is in roplow.c and ropiplow.c, and an
interface is given in rop.c to the simple Pix image data structure.

2. Binary morphology

This is a source for efficient implementations of binary morphology.
Details are found starting at the Leptonica home page, and by reading
the source code.

Binary morphology is implemented two ways:

(a) Successive full image rasterops for arbitrary
structuring elements (Sels)

(b) Destination word accumulation (dwa) for specific Sels.
This code is automatically generated. See, for example,
the code in fmorphgen.1.c and fmorphgenlow.1.c.
These files were generated by running the program
prog/fmorphautogen.c. Results can be checked by comparing dwa
and full image rasterops; e.g., prog/fmorphauto_reg.c.

Method (b) is considerably faster than (a), which is the
reason we've gone to the effort of supporting the use
of this method for all Sels. We also support two different
boundary conditions for erosion.

Similarly, dwa code for the general hit-miss transform can
be auto-generated from an array of hit-miss Sels.
When prog/fhmtautogen.c is compiled and run, it generates
the dwa C code in fhmtgen.1.c and fhmtgenlow.1.c. These
files can then be compiled into the libraries or into other programs.
Results can be checked by comparing dwa and rasterop results;
e.g., prog/fhmtauto_reg.c

Several functions with simple parsers are provided to execute a
sequence of morphological operations (plus binary rank reduction
and replicative expansion). See morphseq.c.

The structuring element is represented by a simple Sel data structure
defined in morph.h. We provide (at least) seven ways to generate
Sels in sel1.c, and several simple methods to generate hit-miss
Sels for pattern finding in selgen.c.

In use, the most common morphological Sels are separable bricks,
of dimension n x m (where either n or m, but not both, is commonly 1).
Accordingly, we provide separable morphological operations on brick
Sels, using for binary both rasterops and dwa. Parsers are provided
for a sequence of separable binary (rasterop and dwa) and grayscale
brick morphological operations, in morphseq.c. The main
advantage in using the parsers is that you don't have to create
and destroy Sels, or do any of the intermediate image bookkeeping.

We also give composable separable brick functions for binary images,
for both rasterop and dwa. These decompose each of the linear
operations into a sequence of two operations at different scales,
reducing the operation count to a sum of decomposition factors,
rather than the (un-decomposed) product of factors.
As always, parsers are provided for a sequence of such operations.

3. Grayscale morphology and rank order filters

We give an efficient implementation of grayscale morphology for brick
Sels. See the Leptonica home page and the source code.

Brick Sels are separable into linear horizontal and vertical elements.
We use the van Herk/Gil-Werman algorithm, which performs the calculations
in a time that is independent of the size of the Sels. Implementations
of tophat and hdome are also given. The low-level code is in graymorphlow.c.

We also provide grayscale rank order filters for brick filters.
The rank order filter is a generalization of grayscale morphology
that selects the rank-valued pixel (rather than the min or max).
A color rank order filter applies the grayscale rank operation
independently to each of the (r,g,b) components.

4. Image scaling

Leptonica provides many simple and relatively efficient
implementations of image scaling. Some of them are listed here;
for the full set see the web page and the source code.

Grayscale and color images are scaled using:
- sampling
- lowpass filtering followed by sampling,
- area mapping
- linear interpolation

Scaling operations with antialiased sampling, area mapping,
and linear interpolation are limited to 2, 4 and 8 bpp gray,
24 bpp full RGB color, and 2, 4 and 8 bpp colormapped
(bpp == bits/pixel). Scaling operations with simple sampling
can be done at 1, 2, 4, 8, 16 and 32 bpp. Linear interpolation
is slower but gives better results, especially for upsampling.
For moderate downsampling, best results are obtained with area
mapping scaling. With very high downsampling, either area mapping
or antialias sampling (lowpass filter followed by sampling) give
good results. Fast area mapping with power-of-2 reduction is also
provided. Optional sharpening after resampling is provided to
improve appearance by reducing the visual effect of averaging
across sharp boundaries.

For fast analysis of grayscale and color images, it is useful to
have integer subsampling combined with pixel depth reduction.
RGB color images can thus be converted to low-resolution
grayscale and binary images.

For binary scaling, the dest pixel can be selected from the
closest corresponding source pixel. For the special case of
power-of-2 binary reduction, low-pass rank-order filtering can be
done in advance. Isotropic integer expansion is done by pixel replication.

We also provide 2x, 3x, 4x, 6x, 8x, and 16x scale-to-gray reduction
on binary images, to produce high quality reduced grayscale images.
These are integrated into a scale-to-gray function with arbitrary
reduction.

Conversely, we have special 2x and 4x scale-to-binary expansion
on grayscale images, using linear interpolation on grayscale
raster line buffers followed by either thresholding or dithering.

There are also image depth converters that don't have scaling,
such as unpacking operations from 1 bpp to grayscale, and
thresholding and dithering operations from grayscale to 1, 2 and 4 bpp.

5. Image shear and rotation (and affine, projective, ...)

Image shear is implemented with both rasterops and linear interpolation.
The rasterop implementation is faster and has no constraints on image
depth. We provide horizontal and vertical shearing about an
arbitrary point (really, a line), both in-place and from source to dest.
The interpolated shear is used on 8 bpp and 32 bpp images, and
gives a smoother result. Shear is used for the fastest implementations
of rotation.

There are three different types of general image rotators:

a. Grayscale rotation using area mapping
- pixRotateAM() for 8 bit gray and 24 bit color, about center
- pixRotateAMCorner() for 8 bit gray, about image UL corner
- pixRotateAMColorFast() for faster 24 bit color, about center

b. Rotation of an image of arbitrary bit depth, using
either 2 or 3 shears. These rotations can be done
about an arbitrary point, and they can be either
from source to dest or in-place; e.g.
- pixRotateShear()
- pixRotateShearIP()

c. Rotation by sampling. This can be used on images of arbitrary
depth, and done about an arbitrary point. Colormaps are retained.

The area mapping rotations are slower and more accurate,
because each new pixel is composed using an average of four
neighboring pixels in the original image; this is sometimes
also called "antialiasing". Very fast color area mapping
rotation is provided. The low-level code is in rotateamlow.c.

The shear rotations are much faster, and work on images
of arbitrary pixel depth, but they just move pixels
around without doing any averaging. The pixRotateShearIP()
operates on the image in-place.

We also provide orthogonal rotators (90, 180, 270 degree; left-right
flip and top-bottom flip) for arbitrary image depth.
And we provide implementations of affine, projective and bilinear
transforms, with both sampling (for speed) and interpolation
(for antialiasing).

6. Sequential algorithms

We provide a number of fast sequential algorithms, including
binary and grayscale seedfill, and the distance function for
a binary image. The most efficient binary seedfill is
pixSeedfill(), which uses Luc Vincent's algorithm to iterate
raster- and antiraster-ordered propagation, and can be used
for either 4- or 8-connected fills. Similar raster/antiraster
sequential algorithms are used to generate a distance map from
a binary image, and for grayscale seedfill. We also use Heckbert's
stack-based filling algorithm for identifying 4- and 8-connected
components in a binary image. A fast implementation of the
watershed transform, using a priority queue, is included.

7. Image enhancement

A few simple image enhancement routines for grayscale and
color images have been provided. These include intensity mapping
with gamma correction and contrast enhancement, as well as edge
sharpening, smoothing, and hue and saturation modification.

8. Convolution and cousins

A number of standard image processing operations are also
included, such as block convolution, binary block rank filtering,
grayscale and rgb rank order filtering, and edge and local
minimum/maximum extraction. Generic convolution is included,
for both separable and non-separable kernels, using float arrays
in the Pix. Two implementations are included for grayscale and
color bilateral filtering: a straightforward (slow) one, and a
fast, approximate, separable one.

9. Image I/O

Some facilities have been provided for image input and output.
This is of course required to build executables that handle images,
and many examples of such programs, most of which are for
testing, can be built in the prog directory. Functions have been
provided to allow reading and writing of files in JPEG, PNG,
TIFF, BMP, PNM, GIF, WEBP and JP2 formats. These formats were chosen
for the following reasons:

- JFIF JPEG is the standard method for lossy compression
of grayscale and color images. It is supported natively
in all browsers, and uses a good open source compression
library. Decompression is supported by the rasterizers
in PS and PDF, for level 2 and above. It has a progressive
mode that compresses about 10% better than standard, but
is considerably slower to decompress. See jpegio.c.

- PNG is the standard method for lossless compression
of binary, grayscale and color images. It is supported
natively in all browsers, and uses a good open source
compression library (zlib). It is superior in almost every
respect to GIF (which, until recently, contained proprietary
LZW compression). See pngio.c.

- TIFF is a common interchange format, which supports different
depths, colormaps, etc., and also has a relatively good and
widely used binary compression format (CCITT Group 4).
Decompression of G4 is supported by rasterizers in PS and PDF,
level 2 and above. G4 compresses better than PNG for most
text and line art images, but it does quite poorly for halftones.
It has good and stable support by Leffler's open source library,
which is clean and small. TIFF also supports multipage
images through a directory structure. See tiffio.c.

- BMP (until recently) had no compression. It is a simple
format with colormaps that requires no external libraries.
It is commonly used because it is a Microsoft standard,
but has little besides simplicity to recommend it. See bmpio.c.

- PNM is a very simple, old format that still has surprisingly
wide use in the image processing community. It does not
support compression or colormaps, but it does support binary,
grayscale and rgb images. Like BMP, the implementation
is simple and requires no external libraries. See pnmio.c.

- WEBP is a newer encoding method based on VP8 intra-frame coding,
derived from libvpx, a video compression library. It is rapidly
growing in acceptance,
and is supported natively in several browsers. Leptonica provides
an interface through webp into the underlying codec. You need
to download libwebp.

- JP2K (jpeg2000) is a wavelet encoding method that has clear
advantages over jpeg in compression and quality (especially when
the image has sharp edges, such as scanned documents), but is
only slowly growing in acceptance. For it to be widely supported,
it will require support on a major browser (as with webp).
Leptonica provides an interface through openjpeg into the underlying
codec. You need to download libopenjp2, version 2.X.

- GIF is still widely used in the world. With the expiration
of the LZW patent, it is practical to add support for GIF files.
The open source gif library is relatively incomplete and
unsupported (because of the Sperry-Rand-Burroughs-Univac
patent history). See gifio.c.

Here's a summary of compression support and limitations:
- All formats except JPEG, WEBP and JP2K support 1 bpp binary.
- All formats support 8 bpp grayscale (GIF must have a colormap).
- All formats except GIF support rgb color.
- All formats except PNM, JPEG, WEBP and JP2K support 8 bpp colormap.
- PNG and PNM support 2 and 4 bpp images.
- PNG supports 2 and 4 bpp colormap, and 16 bpp without colormap.
- PNG, JPEG, TIFF, WEBP, JP2K and GIF support image compression;
PNM and BMP do not.
- WEBP supports rgb color and rgba.
- JP2K supports 8 bpp grayscale, rgb color and rgba.
Use prog/ioformats_reg for a regression test on all formats, including
thorough testing on TIFF.
For more thorough testing on other formats, use:
- prog/pngio_reg for PNG
- prog/gifio_reg for GIF
- prog/webpio_reg for WEBP
- prog/jp2kio_reg for JP2K

We provide generators for PS output, from all types of input images.
The output can be either uncompressed or compressed with level 2
(ccittg4 or dct) or level 3 (flate) encoding. You have flexibility
for scaling and placing of images, and for printing at different
resolutions. You can also compose mixed raster (text, image) PS.
See psio1.c for examples of how to output PS for different applications.
As examples of usage, see:
* prog/converttops.c for a general image --> PS conversion
for printing. You can specify compression level (1, 2, or 3).
* prog/convertfilestops.c to generate a multipage level 3 compressed
PS file that can then be converted to pdf with ps2pdf.
* prog/convertsegfilestops.c to generate a multipage, mixed raster,
level 2 compressed PS file.

We provide generators for PDF output, again from all types of input
images, and with ccittg4, dct, flate and jpx (jpeg2000) compression.
You can do the following for PDF:
* Put any number of images onto a page, with specified input
resolution, location and compression.
* Write a mixed raster PDF, given an input image and a segmentation
mask. Non-image regions are written in G4 (fax) encoding.
* Concatenate single-page PDF wrapped images into a single PDF file.
* Build a PDF file of all images in a directory or array of file names.
As examples of usage, see:
* prog/converttopdf.c: fast pdf generation with one image/page.
For speed, this avoids transcoding whenever possible.
* prog/convertfilestopdf.c: more flexibility in the output. You
can set the resolution, scaling, encoding type and jpeg quality.
* prog/convertsegfilestopdf.c: generates a multipage, mixed raster pdf,
with separate controls for compressing text and non-text regions.

Note: any or all of these I/O library calls can be stubbed out at
compile time, using the environment variables in environ.h.

For all formatted reads and writes, we support reading from memory
and writing to memory. (We cheat with gif, using a file intermediary.)
For all formats except TIFF, these memory I/O functions are
implemented with open_memstream() and fmemopen(), which are
available only with the GNU C runtime library (glibc).
Therefore, except for TIFF, you will not be able to do memory-based
reads and writes on these platforms:
OSX, Windows, Solaris
To enable/disable memory I/O for image read/write, see environ.h.

We also provide fast serialization and deserialization between a pix
in memory and a file (spixio.c). This works on all types of pix images.

10. Colormap removal and color quantization

Leptonica provides functions that remove colormaps, for conversion
to either 8 bpp gray or 24 bpp RGB. It also provides the inverse
function to colormap removal; namely, color quantization
from 24 bpp full color to 8 bpp colormap with some number
of colormap colors. Several versions are provided, some that
use a fast octree vector quantizer and others that use
a variation of the median cut quantizer. For high-level interfaces,
see for example: pixConvertRGBToColormap(), pixOctreeColorQuant(),
pixOctreeQuantByPopulation(), pixFixedOctcubeQuant256(),
and pixMedianCutQuant().

11. Programmatic image display

For debugging, several pixDisplay* functions are provided in writefile.c.
Two of them (pixDisplay and pixDisplayWithTitle) can be called to display
an image using one of several display programs (xzgv, xli, xv, l_view).
If necessary to fit on the screen, the image is reduced in size,
with 1 bpp images being converted to grayscale for readability.
(This is much better than letting xv do the reduction, for example).
Another function, pixDisplayWrite(), writes images to disk under
control of a reduction/disable flag, which then allows
either viewing with pixDisplayMultiple(), or the generation
of a composite image using, for example, pixaDisplayTiledAndScaled().
These files can also be gathered up into a compressed PDF or PostScript
file and viewed with evince. Common image display programs are: xzgv,
xli, xv, display, gthumb, gqview, evince, gv and acroread. Finally,
a set of images can be saved into a pixa (array of pix), specifying the
eventual layout into a single pix, using pixaDisplay*().

12. Document image analysis

Many functions have been included specifically to help with
document image analysis. These include skew and text orientation
detection; page segmentation; baseline finding for text;
unsupervised classification of connected components, characters
and words; dewarping of camera images; adaptive binarization;
a simple book-adaptive classifier for various character sets;
segmentation of newspaper articles; etc.

13. Data structures

Several simple data structures are provided for safe and efficient handling
of arrays of numbers, strings, pointers, and bytes. The generic
pointer array is implemented in four ways: as a stack, a queue,
a heap (used to implement a priority queue), and an array with
insertion and deletion, from which the stack operations form a subset.
Byte arrays are implemented both as a wrapper around the actual
array and as a queue. The string arrays are particularly useful
for both parsing and composing text. Generic lists with
doubly-linked cons cells are also provided.

14. Examples of programs that are easily built using the library:

- for plotting x-y data, we give a programmatic interface
to the gnuplot program, with output to X11, png, ps or eps.
We also allow serialization of the plot data, in a form
such that the data can be read, the commands generated,
and (finally) the plot constructed by running gnuplot.

- a simple jbig2-type classifier, using various distance
metrics between image components (correlation, rank
hausdorff); see prog/jbcorrelation.c, prog/jbrankhaus.c.

- a simple color segmenter, giving a smoothed image
with a small number of the most significant colors.

- a program for converting all images in a directory
to a PostScript file, and a program for printing an image
in any (supported) format to a PostScript printer.

- various programs for generating pdf files from compressed
images, including very fast ones that don't scale and
avoid transcoding if possible.

- converters between binary images and SVG format.

- an adaptive recognition utility for training and identifying
text characters in a multipage document such as a book.

- a bitmap font facility that allows painting text onto
images. We currently support one font in several sizes.
The font images and postscript programs for generating
them are stored in prog/fonts/, and also as compiled strings
in bmfdata.h.

- a binary maze game lets you generate mazes and find shortest
paths between two arbitrary points, if such a path exists.
You can also compute the "shortest" (i.e., least cost) path
between points on a grayscale image.

- a 1D barcode reader. This is still in an early stage of development,
with little testing, and it only decodes 6 formats.

- a utility that will dewarp images of text that were captured
with a camera at close range.

- a sudoku solver, including a pretty good test for uniqueness.

- see (12, above) for other document image applications.

15. JBig2 encoder

Leptonica supports an open source jbig2 encoder (yes, there is one!),
which can be downloaded from:
http://www.imperialviolet.org/jbig2.html.
To build the encoder, use the most recent version, which bundles
Leptonica 1.63. Once you've built the encoder, use it to compress
a set of input image files; e.g.:
./jbig2 -v -s [imagefile ...] > [jbig2_file]
You can also generate a pdf wrapping for the output jbig2. To do that,
call jbig2 with the -p arg, which generates a symbol file (output.sym)
plus a set of location files for each input image (output.0000, ...):
./jbig2 -v -s -p [imagefile ...]
and then generate the pdf:
python pdf.py output > [pdf_file]
See the usage documentation for the jbig2 compressor at:
http://www.imperialviolet.org/binary/jbig2enc.html
You can uncompress the jbig2 files using jbig2dec, which can be
downloaded and built from:
http://jbig2dec.sourceforge.net/

16. Versions

New versions of the Leptonica library are released several times
a year, and version numbers are provided for each release in
the makefile and in allheaders.h. All even versions from 1.42 to 1.60,
as well as all versions after 1.60, have been archived at
http://code.google.com/p/leptonica. However, code.google.com no longer
supports uploads of new distributions, which are now available at the
leptonica.org web site.

The number of downloads of leptonica increased by nearly an order
of magnitude with 1.69, due to bundling with Tesseract and
inclusion in Ubuntu 12.04. Leptonica has about 2400 functions,
and the binary API changed slightly with the 1.71 release. A
proper binary release version is required for all Debian packages.
The binary release versions are:
The binary release versions are:
1.69 : 3.0.0
1.70 : 4.0.0
1.71 : 4.2.0
1.72 : 4.3.0
1.73 : 4.4.0

A brief version chronology is maintained in version-notes.html.
Starting with gcc 4.3.3, warnings (promoted to errors by -Werror)
are given for minor infractions such as not checking the return
values of built-in C functions. I have attempted to eliminate these
warnings. In any event, you may still see warnings with the -Wall flag.